OrangeBot.AI Digest — 2026-02-21

51 headlines across 8 sources, aggregated for this day.

Hacker News (15)

  1. Why is Claude an Electron App? (www.dbreunig.com)
  2. EDuke32 – Duke Nukem 3D (Open-Source) (www.eduke32.com)
  3. Cloudflare outage on February 20, 2026 (blog.cloudflare.com)
  4. Claws are now a new layer on top of LLM agents (twitter.com)
  5. What not to write on your security clearance form (1988) (milk.com)
  6. How far back in time can you understand English? (www.deadlanguagesociety.com)
  7. CXMT has been offering DDR4 chips at about half the prevailing market rate (www.koreaherald.com)
  8. macOS's Little-Known Command-Line Sandboxing Tool (2025) (igorstechnoclub.com)
  9. AI uBlock Blacklist (github.com)
  10. Andrej Karpathy talks about "Claws" (simonwillison.net)
  11. LibreOffice blasts OnlyOffice for working with Microsoft to lock users in (www.neowin.net)
  12. I verified my LinkedIn identity. Here's what I handed over (thelocalstack.eu)
  13. EU mandates replaceable batteries by 2027 (2023) (environment.ec.europa.eu)
  14. Acme Weather (acmeweather.com)
  15. Lean 4: How the theorem prover works and why it's the new competitive edge in AI (venturebeat.com)

GitHub Trending (13)

  1. vxcontrol / pentagi

    ✨ Fully autonomous AI Agents system capable of performing complex penetration testing tasks

  2. abhigyanpatwari / GitNexus

    GitNexus: The Zero-Server Code Intelligence Engine - a client-side knowledge graph creator that runs entirely in your browser. Drop in a GitHub repo or ZIP file and get an interactive knowledge graph with a built-in Graph RAG agent. Perfect for code exploration.

  3. obra / superpowers

    An agentic skills framework & software development methodology that works.

  4. huggingface / skills
  5. PowerShell / PowerShell

    PowerShell for every system!

  6. anthropics / claude-code

    Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows - all through natural language commands.

  7. stan-smith / FossFLOW

    Make beautiful isometric infrastructure diagrams

  8. ggml-org / ggml

    Tensor library for machine learning

  9. Stremio / stremio-web

    Stremio - Freedom to Stream

  10. HandsOnLLM / Hands-On-Large-Language-Models

    Official code repo for the O'Reilly Book - "Hands-On Large Language Models"

  11. RichardAtCT / claude-code-telegram

    A powerful Telegram bot that provides remote access to Claude Code, enabling developers to interact with their projects from anywhere with full AI assistance and session persistence.

  12. cloudflare / agents

    Build and deploy AI Agents on Cloudflare

  13. hiddify / hiddify-app

    Multi-platform auto-proxy client supporting Sing-box, X-ray, TUIC, Hysteria, Reality, Trojan, SSH, etc. It's open-source, secure, and ad-free.

Hugging Face (15)

  1. SpargeAttention2: Trainable Sparse Attention via Hybrid Top-k+Top-p Masking and Distillation Fine-Tuning

    Many training-free sparse attention methods are effective for accelerating diffusion models. Recently, several works suggest that making sparse attention trainable can further increase sparsity while preserving generation quality. We study three key questions: (1) when do the two common masking rules, i.e., Top-k and Top-p, fail, and how can we avoid these failures? (2) why can trainable sparse attention reach higher sparsity than training-free methods? (3) what are the limitations of fine-tuning sparse attention using the diffusion loss, and how can we address them? Based on this analysis, we propose SpargeAttention2, a trainable sparse attention method that achieves high sparsity without degrading generation quality. SpargeAttention2 includes (i) a hybrid masking rule that combines Top-k and Top-p for more robust masking at high sparsity, (ii) an efficient trainable sparse attention implementation, and (iii) a distillation-inspired fine-tuning objective to better preserve generation quality during fine-tuning using sparse attention. Experiments on video diffusion models show that SpargeAttention2 reaches 95% attention sparsity and a 16.2x attention speedup while maintaining generation quality, consistently outperforming prior sparse attention methods.
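
    The paper's hybrid masking rule can be illustrated with a minimal sketch (the function name, block granularity, and parameters below are illustrative, not the paper's implementation): keep an attention entry if it falls in the Top-k highest-scoring entries or inside the smallest set whose cumulative probability mass reaches p, so either rule alone failing does not drop important entries.

```python
import numpy as np

def hybrid_mask(scores, k=2, p=0.9):
    """Keep entries selected by Top-k OR by Top-p (nucleus) masking.

    scores: 1-D array of attention scores for one query's keys.
    Returns a boolean keep-mask of the same shape.
    """
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()

    order = np.argsort(probs)[::-1]           # indices, highest prob first
    topk = np.zeros_like(probs, dtype=bool)
    topk[order[:k]] = True                    # Top-k rule

    topp = np.zeros_like(probs, dtype=bool)
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1      # smallest prefix with mass >= p
    topp[order[:cutoff]] = True               # Top-p rule

    return topk | topp                        # hybrid: union of both rules

mask = hybrid_mask(np.array([4.0, 1.0, 0.5, 0.1, 3.0]), k=1, p=0.8)
```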

  2. Mobile-Agent-v3.5: Multi-platform Fundamental GUI Agents

    The paper introduces GUI-Owl-1.5, the latest native GUI agent model that features instruct/thinking variants in multiple sizes (2B/4B/8B/32B/235B) and supports a range of platforms (desktop, mobile, browser, and more) to enable cloud-edge collaboration and real-time interaction. GUI-Owl-1.5 achieves state-of-the-art results among open-source models on more than 20 GUI benchmarks: (1) on GUI automation tasks, it obtains 56.5 on OSWorld, 71.6 on AndroidWorld, and 48.4 on WebArena; (2) on grounding tasks, it obtains 80.3 on ScreenSpotPro; (3) on tool-calling tasks, it obtains 47.6 on OSWorld-MCP and 46.8 on MobileWorld; (4) on memory and knowledge tasks, it obtains 75.5 on GUI-Knowledge Bench. GUI-Owl-1.5 incorporates several key innovations: (1) Hybrid Data Flywheel: we construct the data pipeline for UI understanding and trajectory generation based on a combination of simulated environments and cloud-based sandbox environments, in order to improve the efficiency and quality of data collection; (2) Unified Enhancement of Agent Capabilities: we use a unified thought-synthesis pipeline to enhance the model's reasoning capabilities, while placing particular emphasis on improving key agent abilities, including Tool/MCP use, memory, and multi-agent adaptation; (3) Multi-platform Environment RL Scaling: we propose a new environment RL algorithm, MRPO, to address the challenges of multi-platform conflicts and the low training efficiency of long-horizon tasks. The GUI-Owl-1.5 models are open-sourced, and an online cloud-sandbox demo is available at https://github.com/X-PLUG/MobileAgent.

  3. Unified Latents (UL): How to train your latents

    We present Unified Latents (UL), a framework for learning latent representations that are jointly regularized by a diffusion prior and decoded by a diffusion model. By linking the encoder's output noise to the prior's minimum noise level, we obtain a simple training objective that provides a tight upper bound on the latent bitrate. On ImageNet-512, our approach achieves competitive FID of 1.4, with high reconstruction quality (PSNR) while requiring fewer training FLOPs than models trained on Stable Diffusion latents. On Kinetics-600, we set a new state-of-the-art FVD of 1.3.

  4. Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report v1.5

    To understand and identify the unprecedented risks posed by rapidly advancing artificial intelligence (AI) models, Frontier AI Risk Management Framework in Practice presents a comprehensive assessment of their frontier risks. As the general capabilities of Large Language Models (LLMs) rapidly evolve and agentic AI proliferates, this version of the risk analysis technical report presents an updated and granular assessment of five critical dimensions: cyber offense, persuasion and manipulation, strategic deception, uncontrolled AI R&D, and self-replication. Specifically, we introduce more complex scenarios for cyber offense. For persuasion and manipulation, we evaluate the risk of LLM-to-LLM persuasion on newly released LLMs. For strategic deception and scheming, we add a new experiment on emergent misalignment. For uncontrolled AI R&D, we focus on the "mis-evolution" of agents as they autonomously expand their memory substrates and toolsets. We also monitor and evaluate the safety performance of OpenClaw during interaction on Moltbook. For self-replication, we introduce a new resource-constrained scenario. More importantly, we propose and validate a series of robust mitigation strategies to address these emerging threats, providing a preliminary, actionable technical pathway for the secure deployment of frontier AI. This work reflects our current understanding of AI frontier risks and urges collective action to mitigate these challenges.

  5. Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents

    LLMs are increasingly being used for complex problems which are not necessarily resolved in a single response, but require interacting with an environment to acquire information. In these scenarios, LLMs must reason about inherent cost-uncertainty tradeoffs in when to stop exploring and commit to an answer. For instance, on a programming task, an LLM should test a generated code snippet if it is uncertain about the correctness of that code; the cost of writing a test is nonzero, but typically lower than the cost of making a mistake. In this work, we show that we can induce LLMs to explicitly reason about balancing these cost-uncertainty tradeoffs, then perform more optimal environment exploration. We formalize multiple tasks, including information retrieval and coding, as sequential decision-making problems under uncertainty. Each problem has latent environment state that can be reasoned about via a prior which is passed to the LLM agent. We introduce a framework called Calibrate-Then-Act (CTA), where we feed the LLM this additional context to enable it to act more optimally. This improvement is preserved even under RL training of both the baseline and CTA. Our results on information-seeking QA and on a simplified coding task show that making cost-benefit tradeoffs explicit with CTA can help agents discover more optimal decision-making strategies.
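
    The cost-uncertainty tradeoff described above reduces, in its simplest form, to a one-line expected-value comparison (a simplified illustration of the idea, not the paper's CTA framework): keep exploring only while the cost of another probe is below the expected cost of committing now.

```python
def should_explore(p_error, test_cost, mistake_cost):
    """Explore (e.g., run a test) iff its cost is below the expected
    cost of committing now: test_cost < p_error * mistake_cost."""
    return test_cost < p_error * mistake_cost

# An agent 30% unsure about its code should pay for a cheap test,
# but not once it is confident enough that testing no longer pays off.
explore_when_unsure = should_explore(p_error=0.3, test_cost=1.0, mistake_cost=10.0)
commit_when_confident = should_explore(p_error=0.05, test_cost=1.0, mistake_cost=10.0)
```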

  6. "What Are You Doing?": Effects of Intermediate Feedback from Agentic LLM In-Car Assistants During Multi-Step Processing

    Agentic AI assistants that autonomously perform multi-step tasks raise open questions for user experience: how should such systems communicate progress and reasoning during extended operations, especially in attention-critical contexts such as driving? We investigate feedback timing and verbosity from agentic LLM-based in-car assistants through a controlled, mixed-methods study (N=45) comparing planned steps and intermediate results feedback against silent operation with final-only response. Using a dual-task paradigm with an in-car voice assistant, we found that intermediate feedback significantly improved perceived speed, trust, and user experience while reducing task load - effects that held across varying task complexities and interaction contexts. Interviews further revealed user preferences for an adaptive approach: high initial transparency to establish trust, followed by progressively reducing verbosity as systems prove reliable, with adjustments based on task stakes and situational context. We translate our empirical findings into design implications for feedback timing and verbosity in agentic assistants, balancing transparency and efficiency.

  7. Arcee Trinity Large Technical Report

    We present the technical report for Arcee Trinity Large, a sparse Mixture-of-Experts model with 400B total parameters and 13B activated per token. Additionally, we report on Trinity Nano and Trinity Mini: Trinity Nano has 6B total parameters with 1B activated per token, and Trinity Mini has 26B total parameters with 3B activated per token. The models' modern architecture includes interleaved local and global attention, gated attention, depth-scaled sandwich norm, and sigmoid routing for Mixture-of-Experts. For Trinity Large, we also introduce a new MoE load balancing strategy titled Soft-clamped Momentum Expert Bias Updates (SMEBU). We train the models using the Muon optimizer. All three models completed training with zero loss spikes. Trinity Nano and Trinity Mini were pre-trained on 10 trillion tokens, and Trinity Large was pre-trained on 17 trillion tokens. The model checkpoints are available at https://huggingface.co/arcee-ai.

  8. DDiT: Dynamic Patch Scheduling for Efficient Diffusion Transformers

    Diffusion Transformers (DiTs) have achieved state-of-the-art performance in image and video generation, but their success comes at the cost of heavy computation. This inefficiency is largely due to the fixed tokenization process, which uses constant-sized patches throughout the entire denoising phase, regardless of the content's complexity. We propose dynamic tokenization, an efficient test-time strategy that varies patch sizes based on content complexity and the denoising timestep. Our key insight is that early timesteps only require coarser patches to model global structure, while later iterations demand finer (smaller-sized) patches to refine local details. During inference, our method dynamically reallocates patch sizes across denoising steps for image and video generation and substantially reduces cost while preserving perceptual generation quality. Extensive experiments demonstrate the effectiveness of our approach: it achieves up to 3.52x and 3.2x speedup on FLUX-1.Dev and Wan 2.1, respectively, without compromising generation quality and prompt adherence.
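
    The coarse-to-fine idea can be sketched as a simple patch-size schedule over denoising steps (the sizes and the equal-phase split below are illustrative; the paper's actual scheduling is also content-dependent):

```python
def patch_size_for_step(step, num_steps, sizes=(8, 4, 2)):
    """Map a denoising step to a patch size: early (noisiest) steps get
    coarse patches, late steps get fine patches.

    step 0 is the first (noisiest) denoising step.
    """
    # Split the trajectory into len(sizes) equal phases.
    phase = min(step * len(sizes) // num_steps, len(sizes) - 1)
    return sizes[phase]

schedule = [patch_size_for_step(t, 30) for t in range(30)]
# Coarser patches early mean fewer tokens and cheaper attention:
# token count per frame scales as (H / patch) * (W / patch).
```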

  9. TactAlign: Human-to-Robot Policy Transfer via Tactile Alignment

    Human demonstrations collected by wearable devices (e.g., tactile gloves) provide fast and dexterous supervision for policy learning, and are guided by rich, natural tactile feedback. However, a key challenge is how to transfer human-collected tactile signals to robots despite the differences in sensing modalities and embodiment. Existing human-to-robot (H2R) approaches that incorporate touch often assume identical tactile sensors, require paired data, and involve little to no embodiment gap between the human demonstrator and the robots, limiting scalability and generality. We propose TactAlign, a cross-embodiment tactile alignment method that transfers human-collected tactile signals to a robot with a different embodiment. TactAlign transforms human and robot tactile observations into a shared latent representation using a rectified flow, without paired datasets, manual labels, or privileged information. Our method enables low-cost latent transport guided by hand-object interaction-derived pseudo-pairs. We demonstrate that TactAlign improves H2R policy transfer across multiple contact-rich tasks (pivoting, insertion, lid closing), generalizes to unseen objects and tasks with less than 5 minutes of human data, and enables zero-shot H2R transfer on a highly dexterous task (light bulb screwing).

  10. Computer-Using World Model

    Agents operating in complex software environments benefit from reasoning about the consequences of their actions, as even a single incorrect user interface (UI) operation can derail long, artifact-preserving workflows. This challenge is particularly acute for computer-using scenarios, where real execution does not support counterfactual exploration, making large-scale trial-and-error learning and planning impractical despite the environment being fully digital and deterministic. We introduce the Computer-Using World Model (CUWM), a world model for desktop software that predicts the next UI state given the current state and a candidate action. CUWM adopts a two-stage factorization of UI dynamics: it first predicts a textual description of agent-relevant state changes, and then realizes these changes visually to synthesize the next screenshot. CUWM is trained on offline UI transitions collected from agents interacting with real Microsoft Office applications, and further refined with a lightweight reinforcement learning stage that aligns textual transition predictions with the structural requirements of computer-using environments. We evaluate CUWM via test-time action search, where a frozen agent uses the world model to simulate and compare candidate actions before execution. Across a range of Office tasks, world-model-guided test-time scaling improves decision quality and execution robustness.

  11. On the Mechanism and Dynamics of Modular Addition: Fourier Features, Lottery Ticket, and Grokking

    We present a comprehensive analysis of how two-layer neural networks learn features to solve the modular addition task. Our work provides a full mechanistic interpretation of the learned model and a theoretical explanation of its training dynamics. While prior work has identified that individual neurons learn single-frequency Fourier features and phase alignment, it does not fully explain how these features combine into a global solution. We bridge this gap by formalizing a diversification condition that emerges during training when overparametrized, consisting of two parts: phase symmetry and frequency diversification. We prove that these properties allow the network to collectively approximate a flawed indicator function on the correct logic for the modular addition task. While individual neurons produce noisy signals, the phase symmetry enables a majority-voting scheme that cancels out noise, allowing the network to robustly identify the correct sum. Furthermore, we explain the emergence of these features under random initialization via a lottery ticket mechanism. Our gradient flow analysis proves that frequencies compete within each neuron, with the "winner" determined by its initial spectral magnitude and phase alignment. From a technical standpoint, we provide a rigorous characterization of the layer-wise phase coupling dynamics and formalize the competitive landscape using the ODE comparison lemma. Finally, we use these insights to demystify grokking, characterizing it as a three-stage process involving memorization followed by two generalization phases, driven by the competition between loss minimization and weight decay.
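
    The single-frequency Fourier mechanism described above can be checked directly: for modulus p, the score sum over frequencies k of cos(2*pi*k*(a + b - c)/p) peaks exactly at c = (a + b) mod p, since every cosine term equals 1 only there and the off-target terms partially cancel. The sketch below is a numerical illustration of that mechanism, not the trained two-layer network itself; the frequency set is arbitrary.

```python
import numpy as np

def modular_add_via_fourier(a, b, p, freqs=(1, 2, 3)):
    """Predict (a + b) mod p by scoring every candidate c with a sum of
    single-frequency Fourier features.

    Each term cos(2*pi*k*(a + b - c) / p) equals 1 exactly when
    (a + b - c) is a multiple of p, so the frequencies "vote" for the
    correct residue while off-target contributions partially cancel.
    """
    c = np.arange(p)
    scores = sum(np.cos(2 * np.pi * k * (a + b - c) / p) for k in freqs)
    return int(np.argmax(scores))
```

With p = 113 (a modulus common in grokking experiments), `modular_add_via_fourier(112, 112, 113)` recovers 111 = 224 mod 113.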

  12. ArXiv-to-Model: A Practical Study of Scientific LM Training

    While frontier large language models demonstrate strong reasoning and mathematical capabilities, the practical process of training domain-specialized scientific language models from raw sources remains under-documented. In this work, we present a detailed case study of training a 1.36B-parameter scientific language model directly from raw arXiv LaTeX sources spanning mathematics, computer science, and theoretical physics. We describe an end-to-end pipeline covering metadata filtering, archive validation, LaTeX extraction, text normalization, domain-aware tokenization, and dense transformer training under constrained compute (2xA100 GPUs). Through 24 experimental runs, we analyze training stability, scaling behavior, data yield losses, and infrastructure bottlenecks. Our findings highlight how preprocessing decisions significantly affect usable token volume, how tokenization impacts symbolic stability, and how storage and I/O constraints can rival compute as limiting factors. We further analyze convergence dynamics and show stable training behavior in a data-rich regime (52B pretraining tokens). Rather than proposing a novel architecture, this work provides an engineering-grounded, transparent account of training a small scientific language model from scratch. We hope these insights support researchers operating under moderate compute budgets who seek to build domain-specialized models.

  13. Discovering Multiagent Learning Algorithms with Large Language Models

    Much of the advancement of Multi-Agent Reinforcement Learning (MARL) in imperfect-information games has historically depended on manual iterative refinement of baselines. While foundational families like Counterfactual Regret Minimization (CFR) and Policy Space Response Oracles (PSRO) rest on solid theoretical ground, the design of their most effective variants often relies on human intuition to navigate a vast algorithmic design space. In this work, we propose the use of AlphaEvolve, an evolutionary coding agent powered by large language models, to automatically discover new multiagent learning algorithms. We demonstrate the generality of this framework by evolving novel variants for two distinct paradigms of game-theoretic learning. First, in the domain of iterative regret minimization, we evolve the logic governing regret accumulation and policy derivation, discovering a new algorithm, Volatility-Adaptive Discounted (VAD-)CFR. VAD-CFR employs novel, non-intuitive mechanisms-including volatility-sensitive discounting, consistency-enforced optimism, and a hard warm-start policy accumulation schedule-to outperform state-of-the-art baselines like Discounted Predictive CFR+. Second, in the regime of population based training algorithms, we evolve training-time and evaluation-time meta strategy solvers for PSRO, discovering a new variant, Smoothed Hybrid Optimistic Regret (SHOR-)PSRO. SHOR-PSRO introduces a hybrid meta-solver that linearly blends Optimistic Regret Matching with a smoothed, temperature-controlled distribution over best pure strategies. By dynamically annealing this blending factor and diversity bonuses during training, the algorithm automates the transition from population diversity to rigorous equilibrium finding, yielding superior empirical convergence compared to standard static meta-solvers.

  14. 2Mamba2Furious: Linear in Complexity, Competitive in Accuracy

    Linear attention transformers have become a strong alternative to softmax attention due to their efficiency. However, linear attention tends to be less expressive and results in reduced accuracy compared to softmax attention. To bridge the accuracy gap between softmax attention and linear attention, we manipulate Mamba-2, a very strong linear attention variant. We first simplify Mamba-2 down to its most fundamental and important components, evaluating which specific choices make it most accurate. From this simplified Mamba variant (Mamba-2S), we improve the A-mask and increase the order of the hidden state, resulting in a method, which we call 2Mamba, that is nearly as accurate as softmax attention, yet much more memory efficient for long context lengths. We also investigate elements of Mamba-2 that help surpass softmax attention accuracy. Code is provided for all our experiments.
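
    As background for the accuracy-versus-efficiency tradeoff above: generic (non-normalized, causal) linear attention can be written as a recurrence over a running d x d state, which is why its cost grows linearly rather than quadratically in sequence length. This sketch shows the base formulation the Mamba family builds on, not Mamba-2 or 2Mamba itself.

```python
import numpy as np

def linear_attention(Q, K, V):
    """Causal linear attention as a recurrence: instead of a T x T
    score matrix, carry a running (d_k x d_v) state S = sum_j k_j v_j^T.
    Cost is O(T) in sequence length, O(d_k * d_v) memory for the state."""
    T, _ = Q.shape
    S = np.zeros((K.shape[1], V.shape[1]))
    out = np.zeros_like(V)
    for t in range(T):
        S = S + np.outer(K[t], V[t])   # fold current token into the state
        out[t] = Q[t] @ S              # q_t^T * sum_{j<=t} k_j v_j^T
    return out
```

The recurrence is algebraically identical to the quadratic form out_t = sum over j <= t of (q_t . k_j) v_j, just computed without materializing the attention matrix.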

  15. FRAPPE: Infusing World Modeling into Generalist Policies via Multiple Future Representation Alignment

    Enabling VLA models to predict environmental dynamics, known as world modeling, has been recognized as essential for improving robotic reasoning and generalization. However, current approaches face two main issues: (1) the training objective forces models to over-emphasize pixel-level reconstruction, which constrains semantic learning and generalization; (2) reliance on predicted future observations during inference often leads to error accumulation. To address these challenges, we introduce Future Representation Alignment via Parallel Progressive Expansion (FRAPPE). Our method adopts a two-stage fine-tuning strategy: in the mid-training phase, the model learns to predict the latent representations of future observations; in the post-training phase, we expand the computational workload in parallel and align the representation simultaneously with multiple different visual foundation models. By significantly improving fine-tuning efficiency and reducing dependence on action-annotated data, FRAPPE provides a scalable and data-efficient pathway to enhance world-awareness in generalist robotic policies. Experiments on the RoboTwin benchmark and real-world tasks demonstrate that FRAPPE outperforms state-of-the-art approaches and shows strong generalization in long-horizon and unseen scenarios.

Solidot (8)

  1. Trump to order the release of files on aliens and UFOs

    US President Donald Trump said he will order agencies, including the Department of Defense, to release files related to aliens and UFOs. The move is widely seen as an attempt to divert public attention from the Epstein files. Trump said he does not know whether aliens are real. Last week, former US President Barack Obama told interviewer Brian Tyler Cohen that he believes aliens are real, but that he has never seen one, that no aliens are held at Area 51, and that there is no secret underground facility at Area 51, unless there really is some vast conspiracy whose participants have hidden the existence of aliens even from the US President.

  2. FBI informant helped run the dark-web drug market Incognito

    At the sentencing hearing of Lin Rui-Siang, an administrator of the dark-web drug market Incognito, his defense attorney disclosed for the first time that another Incognito administrator was an FBI informant, and that this administrator handled most of the site's drug transactions; Lin insists he was mainly responsible for the code and the site's technical infrastructure. In a prison interview, Lin said the unnamed FBI informant was a full partner in Incognito who handled most of the site's administration, including resolving disputes and deciding which vendors could sell drugs and which would be removed. Lin claims the FBI was effectively running the drug marketplace. Prosecutors deny that the informant was Lin's peer, saying the informant was his subordinate rather than an equal, and deny that the government ran the site.

  3. Mozilla advises Windows 7/8/8.1 users to switch to Linux

    Mozilla says Firefox 115 ESR, released in July 2023, is the last Firefox version to support Windows 7/8/8.1. It will ship the final security update for Firefox 115 ESR at the end of February 2026 and then end support. Mozilla advises Windows 7/8/8.1 users to upgrade to Windows 10 or later; for users whose PCs cannot upgrade to Windows 10 or 11 due to Microsoft's restrictions, Mozilla recommends switching to a Linux distribution.

  4. The cancer genome of the domestic cat

    Cancer is a common cause of death in domestic cats, yet the feline cancer genome remains poorly understood. According to a study in this week's issue of Science, scientists sequenced 493 samples spanning 13 different types of feline cancer, along with matched healthy control tissue. Domestic cats not only share living environments with humans but also often develop the same non-cancer comorbidities (such as diabetes) as their owners, making them an important but underused resource for tumor research. After comparing nearly 1,000 human cancer genes with their feline counterparts, the researchers found oncogenes, such as TP53, with similar prevalence in both species. They also identified cancer driver genes, tumor-susceptibility genes, and evidence of certain viral sequences in the feline cancer genome.

  5. Fedora project lifts IP ban on Syria

    Last week the Fedora Infrastructure Team lifted the ban on Syrian IP address ranges, allowing Syrian users to download Fedora Linux images and restoring their access to the Fedora Linux RPM repositories, the Fedora Account System, and the Fedora build system. Syrian users' access to the Fedora project is no longer restricted. The Trump administration lifted sanctions on Syria last year, and the US Department of Commerce subsequently relaxed its export-control policy toward Syria; the Fedora project's move is a response to those policy changes.

  6. US ends funding for internet freedom programs

    Over the past decade the US funded "Internet Freedom" programs with more than $1 billion, but after Trump took office the grants stopped almost immediately. As part of mass government layoffs, the programs' senior staff resigned or were fired in 2025, and many sub-projects were permanently canceled. The Open Technology Fund (OTF), a nonprofit that managed half of the program funding, won a lawsuit against the government last December, but the Trump administration is appealing the ruling. The administration also withdrew in January from the Freedom Online Coalition, an alliance defending digital rights. Popular tools funded through the internet freedom programs include the encrypted messaging service Signal, the Tor Browser, and other censorship-circumvention tools and technologies.

  7. 99% of adults over 40 have a rotator cuff abnormality

    According to a study published in JAMA Internal Medicine, MRI imaging shows that 99% of adults over 40 have at least one rotator cuff abnormality. The researchers argue that when such a high proportion of people share the same findings, the condition should not be regarded as an abnormality but as a normal state requiring no treatment. The rotator cuff is the group of muscles and tendons that stabilizes the shoulder and allows the joint's wide range of motion. A total of 602 participants aged 41-76 completed the study; 82% reported no shoulder symptoms and 18% reported symptoms. MRI imaging showed that 595 participants (99%) had at least one rotator cuff abnormality. The most common finding was a partial tear (62%), followed by tendinopathy (25%) and full-thickness tears (11%), with similar rates in men and women; no full-thickness tears were found in participants under 45, and the rate was highest in the 70-76 age group.

  8. Google's new Pixel 10a is essentially last year's Pixel 9a

    Google has launched the Pixel 10a, a new mid-range phone whose specs and $500 price are essentially identical to last year's Pixel 9a. The most notable change is that the camera no longer protrudes, so the phone can slide around on a table. The Pixel 10a's display resolution matches the Pixel 9a's, with slightly improved cover glass and peak brightness. The processor is still the Tensor G4 rather than the newer Tensor G5 SoC used in the rest of the Pixel 10 lineup; the camera hardware, RAM, and storage are all the same as the Pixel 9a's; battery life is slightly improved; and neither Pixelsnap Qi2 wireless charging nor premium Gemini AI features are offered.