WEEK · 2025-W35

Weekly Digest — 2025-W35

145 unique stories (2025-08-25 to 2025-08-31), aggregated across 8 sources.

Hacker News (42)

  1. Google to require developer verification to install and sideload Android apps (9to5google.com)
  2. Meta just suspended the Facebook account of Neal Stephenson (twitter.com)
  3. Google's Liquid Cooling (chipsandcheese.com)
  4. Temporary suspension of acceptance of mail to the United States (www.post.japanpost.jp)
  5. A visual introduction to big O notation (samwho.dev)
  6. FCC bars providers for non-compliance with robocall protections (docs.fcc.gov)
  7. Claude for Chrome (www.anthropic.com)
  8. Michigan Supreme Court: Unrestricted phone searches violate Fourth Amendment (reclaimthenet.org)
  9. We regret but have to temporarily suspend shipments to the USA (olimex.wordpress.com)
  10. Japan has opened its first osmotic power plant (www.theguardian.com)
  11. Proposal to Ban Ghost Jobs (www.cnbc.com)
  12. Show HN: A zoomable, searchable archive of BYTE magazine (byte.tsundoku.io)

GitHub Trending (27)

  1. plait-board / drawnix

    All-in-one open-source whiteboard tool (SaaS) with mind maps, flowcharts, freehand drawing, and more.

  2. HKUDS / DeepCode

    DeepCode: Open Agentic Coding (Paper2Code & Text2Web & Text2Backend)

  3. winapps-org / winapps

    Run Windows apps such as Microsoft Office/Adobe in Linux (Ubuntu/Fedora) and GNOME/KDE as if they were a part of the native OS, including Nautilus integration. Hard fork of https://github.com/Fmstrat/winapps/

  4. moeru-ai / airi

    💖🧸 Self-hosted, user-owned Grok Companion: a container of souls for waifu and cyber beings, bringing them into our world, aspiring to reach Neuro-sama's level. Capable of real-time voice chat and playing Minecraft and Factorio. Web / macOS / Windows supported.

  5. HunxByts / GhostTrack

    Useful tool to track location or mobile number

  6. willccbb / verifiers

    Verifiers for LLM Reinforcement Learning

  7. asgeirtj / system_prompts_leaks

    Collection of extracted System Prompts from popular chatbots like ChatGPT, Claude & Gemini

  8. TheAlgorithms / Java

    All Algorithms implemented in Java

  9. MODSetter / SurfSense

    Open Source Alternative to NotebookLM / Perplexity, connected to external sources such as Search Engines, Slack, Linear, Jira, ClickUp, Confluence, Notion, YouTube, GitHub, Discord and more. Join our discord: https://discord.gg/ejRNvftDp9

  10. QuentinFuxa / WhisperLiveKit

    Python package for Real-time, Local Speech-to-Text and Speaker Diarization. FastAPI Server & Web Interface

  11. microsoft / terminal

    The new Windows Terminal and the original Windows console host, all in the same place!

  12. firecracker-microvm / firecracker

    Secure and fast microVMs for serverless computing.

Product Hunt (40)

  1. Trace

    Workflow Automations for the Human 👾 AI Workforce

  2. Qoder

    Qoder is an agentic IDE for real software development.

  3. TraceRoot.AI

    Fix bugs faster with open source, AI native observability

  4. Onlook for Web

    Open source Cursor for designers

  5. AI Elements by Vercel

    The shadcn/ui component library for building AI-native apps

  6. Tab With a View 2.0

    Open a tab, escape somewhere beautiful

  7. Creem 1.0

    Split SaaS revenue with partners, sell without headaches

  8. Draw A Fish

    Draw a fish, and watch it swim with the world

  9. Jotform Instagram Agent

    Auto-replies for Instagram DMs, comments, and stories

  10. Pikto AI Studio

    One AI suite to replace all your design tools

  11. Tasker Builder

    Build your idea from prompt to product to pipeline

  12. Graphite Chat

    The agentic code review experience.

Hugging Face (18)

  1. AgentFly: Fine-tuning LLM Agents without Fine-tuning LLMs

    In this paper, we introduce a novel learning paradigm for adaptive Large Language Model (LLM) agents that eliminates the need for fine-tuning the underlying LLMs. Existing approaches are often either rigid, relying on static, handcrafted reflection workflows, or computationally intensive, requiring gradient updates of LLM model parameters. In contrast, our method enables low-cost continual adaptation via memory-based online reinforcement learning. We formalise this as a Memory-augmented Markov Decision Process (M-MDP), equipped with a neural case-selection policy to guide action decisions. Past experiences are stored in an episodic memory, either differentiable or non-parametric. The policy is continually updated based on environmental feedback through a memory rewriting mechanism, whereas policy improvement is achieved through efficient memory reading (retrieval). We instantiate our agent model in the deep research setting, namely AgentFly, which attains top-1 on GAIA validation (87.88% Pass@3) and 79.40% on the test set. It reaches 66.6% F1 and 80.4% PM on the DeepResearcher dataset, outperforming the state-of-the-art training-based method, while case-based memory adds 4.7% to 9.6% absolute points on out-of-distribution tasks. Our approach offers a scalable and efficient pathway for developing generalist LLM agents capable of continuous, real-time learning without gradient updates, advancing machine learning towards open-ended skill acquisition and deep research scenarios. The code is available at https://github.com/Agent-on-the-Fly/AgentFly.
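The memory-read/write loop in the abstract above can be sketched with a toy non-parametric episodic memory. Everything here (the class name, dot-product similarity, best-reward action selection) is an illustrative assumption, not the AgentFly implementation:

```python
class EpisodicMemory:
    """Toy non-parametric episodic memory: store past cases, retrieve the
    most similar ones by state similarity, act on the best-rewarded case."""

    def __init__(self):
        self.cases = []  # list of (state_vector, action, reward) tuples

    def write(self, state, action, reward):
        # memory rewriting: append environmental feedback as a new case
        self.cases.append((state, action, reward))

    def read(self, state, k=3):
        # memory reading (retrieval): k nearest cases by dot-product similarity
        scored = sorted(self.cases,
                        key=lambda c: -sum(a * b for a, b in zip(c[0], state)))
        return scored[:k]

    def select_action(self, state):
        # case-selection policy: among retrieved neighbours, reuse the action
        # that previously earned the highest reward
        neighbours = self.read(state)
        if not neighbours:
            return None
        return max(neighbours, key=lambda c: c[2])[1]


mem = EpisodicMemory()
mem.write([1.0, 0.0], "search", 0.9)
mem.write([0.0, 1.0], "browse", 0.4)
mem.write([0.9, 0.1], "search", 0.7)
best = mem.select_action([1.0, 0.1])  # similar to the "search" cases
```

The point of the sketch is that "policy improvement" needs no gradient step: writing a new case and re-ranking at read time is the whole update.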

  2. ODYSSEY: Open-World Quadrupeds Exploration and Manipulation for Long-Horizon Tasks

    Language-guided long-horizon mobile manipulation has long been a grand challenge in embodied semantic reasoning, generalizable manipulation, and adaptive locomotion. Three fundamental limitations hinder progress: First, although large language models have improved spatial reasoning and task planning through semantic priors, existing implementations remain confined to tabletop scenarios, failing to address the constrained perception and limited actuation ranges of mobile platforms. Second, current manipulation strategies exhibit insufficient generalization when confronted with the diverse object configurations encountered in open-world environments. Third, while crucial for practical deployment, the dual requirement of maintaining high platform maneuverability alongside precise end-effector control in unstructured settings remains understudied. In this work, we present ODYSSEY, a unified mobile manipulation framework for agile quadruped robots equipped with manipulators, which seamlessly integrates high-level task planning with low-level whole-body control. To address the challenge of egocentric perception in language-conditioned tasks, we introduce a hierarchical planner powered by a vision-language model, enabling long-horizon instruction decomposition and precise action execution. At the control level, our novel whole-body policy achieves robust coordination across challenging terrains. We further present the first benchmark for long-horizon mobile manipulation, evaluating diverse indoor and outdoor scenarios. Through successful sim-to-real transfer, we demonstrate the system's generalization and robustness in real-world deployments, underscoring the practicality of legged manipulators in unstructured environments. Our work advances the feasibility of generalized robotic assistants capable of complex, dynamic tasks. Our project page: https://kaijwang.github.io/odyssey.github.io/

  3. Beyond Pass@1: Self-Play with Variational Problem Synthesis Sustains RLVR

    Reinforcement Learning with Verifiable Rewards (RLVR) has recently emerged as a key paradigm for post-training Large Language Models (LLMs), particularly for complex reasoning tasks. However, vanilla RLVR training has been shown to improve Pass@1 performance at the expense of policy entropy, leading to reduced generation diversity and limiting the Pass@k performance, which typically represents the upper bound of LLM reasoning capability. In this paper, we systematically analyze the policy's generation diversity from the perspective of training problems and find that augmenting and updating training problems helps mitigate entropy collapse during training. Based on these observations, we propose an online Self-play with Variational problem Synthesis (SvS) strategy for RLVR training, which uses the policy's correct solutions to synthesize variational problems while ensuring their reference answers remain identical to the originals. This self-improving strategy effectively maintains policy entropy during training and substantially improves Pass@k compared with standard RLVR, sustaining prolonged improvements and achieving absolute gains of 18.3% and 22.8% in Pass@32 performance on the competition-level AIME24 and AIME25 benchmarks. Experiments on 12 reasoning benchmarks across varying model sizes from 3B to 32B consistently demonstrate the generalizability and robustness of SvS.
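The SvS loop (solve, verify by exact match, synthesize a variant that keeps the same reference answer) can be sketched with stubs; `policy_solve` and `synthesize_variant` are hypothetical stand-ins for LLM calls, not the paper's implementation:

```python
def synthesize_variant(problem):
    """Hypothetical stand-in for the policy rewriting a problem it solved
    correctly; the reference answer is kept identical so the reward stays
    verifiable."""
    return {"question": problem["question"] + " (rephrased)",
            "answer": problem["answer"]}


def svs_step(policy_solve, problems):
    """One sketch iteration: solve every problem, and for each verified-correct
    solution add a variational problem to the training pool."""
    new_problems = []
    for p in problems:
        solution = policy_solve(p)
        if solution == p["answer"]:  # verifiable reward: exact-match check
            new_problems.append(synthesize_variant(p))
    return problems + new_problems


pool = [{"question": "2+2?", "answer": "4"},
        {"question": "3*3?", "answer": "9"}]
solver = lambda p: "4" if "2+2" in p["question"] else "unsure"  # toy policy
pool = svs_step(solver, pool)  # only the solved problem spawns a variant
```

Growing the pool only from the policy's own correct solutions is what keeps the synthesized problems on-distribution while still varying them enough to maintain entropy.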

  4. EgoTwin: Dreaming Body and View in First Person

    While exocentric video synthesis has achieved great progress, egocentric video generation remains largely underexplored, which requires modeling first-person view content along with camera motion patterns induced by the wearer's body movements. To bridge this gap, we introduce a novel task of joint egocentric video and human motion generation, characterized by two key challenges: 1) Viewpoint Alignment: the camera trajectory in the generated video must accurately align with the head trajectory derived from human motion; 2) Causal Interplay: the synthesized human motion must causally align with the observed visual dynamics across adjacent video frames. To address these challenges, we propose EgoTwin, a joint video-motion generation framework built on the diffusion transformer architecture. Specifically, EgoTwin introduces a head-centric motion representation that anchors the human motion to the head joint and incorporates a cybernetics-inspired interaction mechanism that explicitly captures the causal interplay between video and motion within attention operations. For comprehensive evaluation, we curate a large-scale real-world dataset of synchronized text-video-motion triplets and design novel metrics to assess video-motion consistency. Extensive experiments demonstrate the effectiveness of the EgoTwin framework.

  5. CRISP: Persistent Concept Unlearning via Sparse Autoencoders

    As large language models (LLMs) are increasingly deployed in real-world applications, the need to selectively remove unwanted knowledge while preserving model utility has become paramount. Recent work has explored sparse autoencoders (SAEs) to perform precise interventions on monosemantic features. However, most SAE-based methods operate at inference time, which does not create persistent changes in the model's parameters. Such interventions can be bypassed or reversed by malicious actors with parameter access. We introduce CRISP, a parameter-efficient method for persistent concept unlearning using SAEs. CRISP automatically identifies salient SAE features across multiple layers and suppresses their activations. We experiment with two LLMs and show that our method outperforms prior approaches on safety-critical unlearning tasks from the WMDP benchmark, successfully removing harmful knowledge while preserving general and in-domain capabilities. Feature-level analysis reveals that CRISP achieves semantically coherent separation between target and benign concepts, allowing precise suppression of the target features.
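The feature-level intervention that CRISP builds on can be sketched with a toy sparse autoencoder. The matrices and indices below are illustrative; note that this snippet shows only the inference-time suppression that CRISP improves upon, since CRISP itself makes the change persistent by updating model parameters:

```python
import numpy as np

def suppress_features(h, W_enc, W_dec, salient):
    """Encode a hidden state into SAE feature activations, zero out the
    salient (target-concept) features, and decode back.  A toy sketch of
    feature-level suppression, not the CRISP training procedure."""
    a = np.maximum(W_enc @ h, 0.0)  # ReLU feature activations
    a[salient] = 0.0                # suppress target-concept features
    return W_dec @ a                # reconstructed hidden state

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 4))    # 8 (over-complete) features, 4-dim state
W_dec = rng.normal(size=(4, 8))
h = rng.normal(size=4)
h_clean = suppress_features(h, W_enc, W_dec, salient=[2, 5])
```

Because SAE features are approximately monosemantic, zeroing a handful of them removes one concept while leaving the rest of the reconstruction untouched, which is what allows the precise target/benign separation the abstract describes.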

  6. Selective Contrastive Learning for Weakly Supervised Affordance Grounding

    Facilitating an entity's interaction with objects requires accurately identifying parts that afford specific actions. Weakly supervised affordance grounding (WSAG) seeks to imitate human learning from third-person demonstrations, where humans intuitively grasp functional parts without needing pixel-level annotations. To achieve this, grounding is typically learned using a shared classifier across images from different perspectives, along with distillation strategies incorporating part discovery process. However, since affordance-relevant parts are not always easily distinguishable, models primarily rely on classification, often focusing on common class-specific patterns that are unrelated to affordance. To address this limitation, we move beyond isolated part-level learning by introducing selective prototypical and pixel contrastive objectives that adaptively learn affordance-relevant cues at both the part and object levels, depending on the granularity of the available information. Initially, we find the action-associated objects in both egocentric (object-focused) and exocentric (third-person example) images by leveraging CLIP. Then, by cross-referencing the discovered objects of complementary views, we excavate the precise part-level affordance clues in each perspective. By consistently learning to distinguish affordance-relevant regions from affordance-irrelevant background context, our approach effectively shifts activation from irrelevant areas toward meaningful affordance cues. Experimental results demonstrate the effectiveness of our method. Codes are available at github.com/hynnsk/SelectiveCL.

  7. InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency

    We introduce InternVL 3.5, a new family of open-source multimodal models that significantly advances versatility, reasoning capability, and inference efficiency along the InternVL series. A key innovation is the Cascade Reinforcement Learning (Cascade RL) framework, which enhances reasoning through a two-stage process: offline RL for stable convergence and online RL for refined alignment. This coarse-to-fine training strategy leads to substantial improvements on downstream reasoning tasks, e.g., MMMU and MathVista. To optimize efficiency, we propose a Visual Resolution Router (ViR) that dynamically adjusts the resolution of visual tokens without compromising performance. Coupled with ViR, our Decoupled Vision-Language Deployment (DvD) strategy separates the vision encoder and language model across different GPUs, effectively balancing computational load. These contributions collectively enable InternVL3.5 to achieve up to a +16.0% gain in overall reasoning performance and a 4.05× inference speedup compared to its predecessor, i.e., InternVL3. In addition, InternVL3.5 supports novel capabilities such as GUI interaction and embodied agency. Notably, our largest model, i.e., InternVL3.5-241B-A28B, attains state-of-the-art results among open-source MLLMs across general multimodal, reasoning, text, and agentic tasks -- narrowing the performance gap with leading commercial models like GPT-5. All models and code are publicly released.

  8. Visual-CoG: Stage-Aware Reinforcement Learning with Chain of Guidance for Text-to-Image Generation

    Despite the promising progress of recent autoregressive models in text-to-image (T2I) generation, their ability to handle multi-attribute and ambiguous prompts remains limited. To address these limitations, existing works have applied chain-of-thought (CoT) to enable stage-aware visual synthesis and employed reinforcement learning (RL) to improve reasoning capabilities. However, most models provide reward signals only at the end of the generation stage. This monolithic final-only guidance makes it difficult to identify which stages contribute positively to the final outcome and may lead to suboptimal policies. To tackle this issue, we propose a Visual-Chain of Guidance (Visual-CoG) paradigm consisting of three stages: semantic reasoning, process refining, and outcome evaluation, with stage-aware rewards providing immediate guidance throughout the image generation pipeline. We further construct a visual cognition benchmark, VisCog-Bench, which comprises four subtasks to evaluate the effectiveness of semantic reasoning. Comprehensive evaluations on GenEval, T2I-CompBench, and the proposed VisCog-Bench show improvements of 15%, 5%, and 19%, respectively, demonstrating the superior performance of the proposed Visual-CoG. We will release all the resources soon.

  9. MV-RAG: Retrieval Augmented Multiview Diffusion

    Text-to-3D generation approaches have advanced significantly by leveraging pretrained 2D diffusion priors, producing high-quality and 3D-consistent outputs. However, they often fail to produce out-of-domain (OOD) or rare concepts, yielding inconsistent or inaccurate results. To this end, we propose MV-RAG, a novel text-to-3D pipeline that first retrieves relevant 2D images from a large in-the-wild 2D database and then conditions a multiview diffusion model on these images to synthesize consistent and accurate multiview outputs. Training such a retrieval-conditioned model is achieved via a novel hybrid strategy bridging structured multiview data and diverse 2D image collections. This involves training on multiview data using augmented conditioning views that simulate retrieval variance for view-specific reconstruction, alongside training on sets of retrieved real-world 2D images using a distinctive held-out view prediction objective: the model predicts the held-out view from the other views to infer 3D consistency from 2D data. To facilitate a rigorous OOD evaluation, we introduce a new collection of challenging OOD prompts. Experiments against state-of-the-art text-to-3D, image-to-3D, and personalization baselines show that our approach significantly improves 3D consistency, photorealism, and text adherence for OOD/rare concepts, while maintaining competitive performance on standard benchmarks.

  10. T2I-ReasonBench: Benchmarking Reasoning-Informed Text-to-Image Generation

    We propose T2I-ReasonBench, a benchmark evaluating reasoning capabilities of text-to-image (T2I) models. It consists of four dimensions: Idiom Interpretation, Textual Image Design, Entity-Reasoning and Scientific-Reasoning. We propose a two-stage evaluation protocol to assess the reasoning accuracy and image quality. We benchmark various T2I generation models, and provide comprehensive analysis on their performances.

  11. Breaking the Exploration Bottleneck: Rubric-Scaffolded Reinforcement Learning for General LLM Reasoning

    Recent advances in Large Language Models (LLMs) have underscored the potential of Reinforcement Learning (RL) to facilitate the emergence of reasoning capabilities. Despite the encouraging results, a fundamental dilemma persists as RL improvement relies on learning from high-quality samples, yet the exploration for such samples remains bounded by the inherent limitations of LLMs. This, in effect, creates an undesirable cycle in which what cannot be explored cannot be learned. In this work, we propose Rubric-Scaffolded Reinforcement Learning (RuscaRL), a novel instructional scaffolding framework designed to break the exploration bottleneck for general LLM reasoning. Specifically, RuscaRL introduces checklist-style rubrics as (1) explicit scaffolding for exploration during rollout generation, where different rubrics are provided as external guidance within task instructions to steer diverse high-quality responses. This guidance is gradually decayed over time, encouraging the model to internalize the underlying reasoning patterns; (2) verifiable rewards for exploitation during model training, where we can obtain robust LLM-as-a-Judge scores using rubrics as references, enabling effective RL on general reasoning tasks. Extensive experiments demonstrate the superiority of the proposed RuscaRL across various benchmarks, effectively expanding reasoning boundaries under the best-of-N evaluation. Notably, RuscaRL significantly boosts Qwen-2.5-7B-Instruct from 23.6 to 50.3 on HealthBench-500, surpassing GPT-4.1. Furthermore, our fine-tuned variant on Qwen3-30B-A3B-Instruct achieves 61.1 on HealthBench-500, outperforming leading LLMs including OpenAI-o3.
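The decayed-scaffolding idea can be sketched as a prompt wrapper whose inclusion probability falls over training; the linear schedule and all names here are illustrative assumptions, not the RuscaRL implementation:

```python
import random

def maybe_scaffold(prompt, rubric, step, total_steps, rng=random.Random(0)):
    """Sketch of decayed rubric scaffolding: early in training, a
    checklist-style rubric is usually appended to the task instruction to
    steer exploration; the probability decays (here linearly) so the model
    internalizes the reasoning pattern instead of depending on the hint."""
    p_include = max(0.0, 1.0 - step / total_steps)  # linear decay to zero
    if rng.random() < p_include:
        return f"{prompt}\n\nRubric:\n{rubric}"
    return prompt

rubric = "- cite evidence\n- state uncertainty"
early = maybe_scaffold("Diagnose the patient.", rubric, step=0, total_steps=100)
late = maybe_scaffold("Diagnose the patient.", rubric, step=100, total_steps=100)
```

At step 0 the rubric is always attached; by the final step it never is, which mirrors the abstract's gradual removal of external guidance during rollout generation (the same rubric then serves as the reference for LLM-as-a-Judge rewards).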

  12. Beyond Memorization: Extending Reasoning Depth with Recurrence, Memory and Test-Time Compute Scaling

    Reasoning is a core capability of large language models, yet understanding how they learn and perform multi-step reasoning remains an open problem. In this study, we explore how different architectures and training methods affect model multi-step reasoning capabilities within a cellular automata framework. By training on state sequences generated with random Boolean functions for random initial conditions to exclude memorization, we demonstrate that most neural architectures learn to abstract the underlying rules. While models achieve high accuracy in next-state prediction, their performance declines sharply if multi-step reasoning is required. We confirm that increasing model depth plays a crucial role for sequential computations. We demonstrate that an extension of the effective model depth with recurrence, memory, and test-time compute scaling substantially enhances reasoning capabilities.
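The memorization-free training setup can be sketched by generating trajectories from a random Boolean rule over 3-cell neighbourhoods; this is an illustrative elementary-CA-style construction in the spirit of the paper, not its exact protocol:

```python
import random

def random_boolean_rule(seed=0):
    """A random Boolean next-state function over 3-cell neighbourhoods:
    a fresh lookup table per seed, so a model cannot memorize one rule."""
    rng = random.Random(seed)
    table = {(a, b, c): rng.randint(0, 1)
             for a in (0, 1) for b in (0, 1) for c in (0, 1)}
    return lambda left, mid, right: table[(left, mid, right)]

def step(state, rule):
    """One synchronous update with periodic boundary conditions."""
    n = len(state)
    return [rule(state[(i - 1) % n], state[i], state[(i + 1) % n])
            for i in range(n)]

def rollout(state, rule, k):
    """k-step trajectory: a model trained on single steps must compose
    this computation internally to predict the final state, which is the
    multi-step regime where accuracy drops sharply."""
    for _ in range(k):
        state = step(state, rule)
    return state

rule = random_boolean_rule(seed=42)
traj = rollout([0, 1, 0, 0, 1, 1], rule, k=5)
```

Training pairs are (state, next state) samples from random rules and random initial conditions; evaluating at k > 1 then isolates sequential computation from memorized input-output mappings.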

Solidot (18)

  1. Ceres may once have been habitable

    Ceres is the largest body in the asteroid belt; NASA's Dawn probe entered orbit around it in 2015, giving scientists close-up observations of the asteroid. By reconstructing thermal and chemical models of Ceres's interior, researchers simulated how its internal temperature and composition changed over time. They found that roughly 2.5 billion years ago, heat from the decay of radioactive elements inside Ceres was sufficient not only to sustain a subsurface water reservoir but also to keep feeding it with hot water. That hot water carried dissolved gases that flowed upward from metamorphic rock in the rocky core, much like the hydrothermal vents of Earth's deep oceans. The study estimates that Ceres's most likely habitable period was between 0.5 and 2 billion years after its formation, i.e. roughly 2.5 to 4 billion years ago. Although Ceres may once have been habitable for microbes, it has long since exhausted its heat: most of its water has frozen, and the remaining liquid has become concentrated brine.

  2. RFK Jr. demanded retraction of a vaccine study; the journal refused

    Anti-vaccine US Health Secretary Robert F. Kennedy Jr. demanded that the journal Annals of Internal Medicine retract a paper by Danish researchers, "Aluminum-Adsorbed Vaccines and Chronic Diseases in Childhood: A Nationwide Cohort Study." The analysis of 1.2 million children born in Denmark over more than 20 years found that aluminum compounds in vaccines do not significantly increase the risk of autoimmune, allergic, or neurodevelopmental disorders. Kennedy questioned the study's conclusions. Vaccine skeptics have claimed that aluminum compounds are linked to rising rates of childhood conditions such as autism, a view long since refuted by the WHO and other bodies. Aluminum in salt form is widely used in vaccines, and there is no evidence that the small amounts in vaccines cause serious side effects. The journal said it has no intention of retracting the paper. Retraction Watch co-founder Ivan Oransky noted that public health officials rarely demand retractions, and that Kennedy's move is an attempt to bend scientific journals to his will.

  3. New Zealand's air traffic control system halted for an hour by a software fault

    New Zealand's air traffic control system went down for about an hour last weekend due to a software fault, disrupting airport operations: five aircraft circled in the air and four were unable to take off. Airways, New Zealand's sole air traffic control provider, said the problem was a software fault that prevented flight data from being transmitted between systems. Airways CEO James Young said that once the problem was discovered, controllers immediately took action, holding aircraft either on the ground or in the air. Airways' system has a backup, but Young said it could not be switched over instantly because validating flight data takes time. The outage lasted about an hour, during which two of the circling aircraft continued their flights and three returned to their departure airports.

  4. Yitang Zhang says he left the US for China because of the political climate

    Mathematician Yitang Zhang says he returned to China from the US because of the political climate. Zhang left UC Santa Barbara this June to join Sun Yat-sen University's Hong Kong Institute for Advanced Study, settling and working in the Greater Bay Area. Zhang proved that there are infinitely many pairs of consecutive primes with gaps smaller than 70 million, the first substantive progress in mathematical history toward the famous twin prime conjecture, and has made important progress on the Landau-Siegel zeros conjecture, which is related to the Riemann hypothesis. He said many ethnic Chinese scholars and professors have returned to China. His own field of mathematics, he said, has been little affected by the political climate, but researchers in computing, chips, or anything tied to the defense industry need to be careful. One great advantage of mathematics, especially pure mathematics, he noted, is that research need not be tied to any particular place.

  5. Developer who planted a kill switch in his former employer's IT systems sentenced to four years

    Davis Lu, a developer who planted malware and a kill switch in his former employer's IT systems before being laid off, was sentenced to four years in prison followed by three years of supervised release. According to the US Department of Justice, Lu was demoted in a 2018 reorganization at his employer, Eaton Corporation. In retaliation, he planted malicious code in the company's Windows production environment. The malware included an infinite Java thread loop designed to exhaust servers and crash production systems. Lu also created a rather obvious kill switch named IsDLEnabledinAD ("Is Davis Lu enabled in Active Directory"): if his account were disabled in Active Directory, the kill switch would activate and disable every user's account. On September 9, 2019, Lu's employment was terminated; when his account was disabled the kill switch fired, locking thousands of users out of the system and costing the employer hundreds of thousands of dollars. Before being asked to hand in his company-issued laptop, Lu deleted encrypted data from it. Investigators later recovered his search history from the device, including queries on how to escalate privileges, hide processes, and delete files quickly.

  6. OpenAI challenges Google using Google search data

    OpenAI is working to displace Google, yet the search data it relies on comes from the search giant itself. The Information reports that OpenAI uses Google search data scraped from the web to improve ChatGPT's responses. When users ask ChatGPT about current events such as news, sports, and the stock market, Google search data is enormously helpful. The data OpenAI uses comes from the web scraping company SerpApi. Last year SerpApi still listed OpenAI as a customer on its website, but later removed the listing.

  7. Musk's xAI sues Apple and OpenAI for stifling competition

    Elon Musk's AI startup xAI has sued Apple and OpenAI, accusing the two companies of illegally colluding to stifle competition in AI. After xAI's chatbot Grok ranked below OpenAI's ChatGPT in Apple's App Store charts, Musk publicly attacked Apple and OpenAI on social media. xAI filed suit in federal court in Texas, alleging the two companies conspired to suppress competitors in the AI field. An OpenAI spokesperson said the move fits Musk's ongoing pattern of harassment.

  8. Heatwave exposure accelerates aging

    According to a study published in Nature Climate Change, exposure to heatwaves accelerates aging. Researchers analyzed 15 years (2008-2022) of health data from 24,922 Taiwanese adults and found that two years of heatwave exposure may accelerate biological aging by 8-12 days. Lead author Cui Guo, an assistant professor at the University of Hong Kong, said that while the number is small, it is significant because heatwaves worldwide have persisted for decades and the world now faces record-breaking heat.

  9. Intel warns the US government stake could provoke negative reactions

    Intel warned on Monday that the US government's 10% stake could provoke negative reactions in other countries. The US government reached a deal with Intel last week converting $8.87 billion in federal chip subsidies into an investment in exchange for a 10% stake in the chip giant. However, 76% of Intel's revenue comes from international markets, with China alone accounting for 29% of its total revenue. In a securities filing, Intel warned that with the US government holding part of its shares, foreign governments might impose additional regulation on Intel or block subsidies from other countries, and the arrangement could also limit its strategic flexibility.

  10. Apple accuses a former employee of stealing smartwatch trade secrets for Oppo

    Apple accuses former Apple Watch employee Dr. Chen Shi of stealing its smartwatch trade secrets for his new employer Oppo, while Oppo denies any wrongdoing. In its court filing, Apple says that before joining Oppo, Chen Shi attended dozens of meetings with Apple Watch team engineers to learn about their work and downloaded 63 documents from a protected Box folder onto a USB drive. Chen Shi had messaged Oppo saying he was gathering as much information as possible. Before leaving, he used his Apple-issued MacBook to search for terms such as "how to wipe a MacBook" and "can others see that I opened files on a shared drive?" Chen Shi was a sensor system architect at Apple and now leads a sensing technology team at Oppo. In his resignation letter to Apple, Shi said he was leaving for personal and family reasons. From messages left on the work iPhone issued to Shi, Apple found that Oppo encouraged him to collect Apple's trade secrets.

  11. Google will block sideloading of Android apps from unverified developers starting next year

    Android is becoming increasingly closed in the name of security. Google announced it will verify the identity of all Android app developers, no longer limiting verification to those who publish on the Play Store. Starting next year, Google will block the sideloading of Android apps from developers whose identity has not been verified. Google's stated rationale is security: it says that since it began requiring all Google Play developers to verify their identity in 2023, malware and fraud have dropped sharply. The search giant concludes that verifying every developer's identity will help strengthen the security of the Android ecosystem.

  12. Twitch cracks down on bot accounts; some channels lose half their viewers

    Twitch's crackdown on bot accounts has cut viewer counts on some streamers' channels by half or more. Analyses from TwitchTracker and StreamsCharts show that well-known streamers such as Tectone and LydiaViolet lost roughly half their viewers, while Asmongold's viewership dropped by 15,000 to 20,000 from its usual level. Mira's viewership fell from over 2,000 to 150-200, and YourRageGaming's dropped from peaks of 20,000-30,000 to 4,000-7,000. Agent00, Lacy, and Plaqueboymax saw similar declines. Viewbotting has plagued Twitch for years: bot accounts artificially inflate viewer counts to boost a channel's ranking, higher rankings mean more real viewers discover the stream, and more viewers mean better sponsorship deals and higher ad rates. The economic incentive is obvious.