OrangeBot.AI Digest — 2026-02-23
52 headlines across 8 sources, aggregated for this day.
Hacker News (15)
- Flock cameras gifted by Horowitz Foundation, avoiding public oversight (thenevadaindependent.com)
- Americans are destroying Flock surveillance cameras (techcrunch.com)
- Binance fired employees who found $1.7B in crypto was sent to Iran (www.nytimes.com)
- ASML unveils EUV light source advance that could yield 50% more chips by 2030 (www.reuters.com)
- Show HN: PgDog – Scale Postgres without changing the app (github.com)
- The Age Verification Trap: Verifying age undermines everyone's data protection (spectrum.ieee.org)
- The peculiar case of Japanese web design (2022) (sabrinas.space)
- Hetzner (European hosting provider) to increase prices by up to 38% (old.reddit.com)
- Ladybird adopts Rust (ladybird.org)
- Hacker News.love – 22 projects Hacker News didn't love (hackernews.love)
- Magical Mushroom – Europe's first industrial-scale mycelium packaging producer (magicalmushroom.com)
- Hetzner prices increase 30-40% (docs.hetzner.com)
- Elsevier shuts down its finance journal citation cartel (www.chrisbrunet.com)
- Pope tells priests to use their brains, not AI, to write homilies (www.ewtnnews.com)
- Sub-$200 Lidar could reshuffle auto sensor economics (spectrum.ieee.org)
GitHub Trending (14)
- x1xhlol / system-prompts-and-models-of-ai-tools
FULL Augment Code, Claude Code, Cluely, CodeBuddy, Comet, Cursor, Devin AI, Junie, Kiro, Leap.new, Lovable, Manus, NotionAI, Orchids.app, Perplexity, Poke, Qoder, Replit, Same.dev, Trae, Traycer AI, VSCode Agent, Warp.dev, Windsurf, Xcode, Z.ai Code, Dia & v0. (And other Open Sourced) System Prompts, Internal Tools & AI Models
- huggingface / skills
- OpenBB-finance / OpenBB
Financial data platform for analysts, quants and AI agents.
- muratcankoylan / Agent-Skills-for-Context-Engineering
A comprehensive collection of Agent Skills for context engineering, multi-agent architectures, and production agent systems. Use when building, optimizing, or debugging agent systems that require effective context management.
- f / prompts.chat
a.k.a. Awesome ChatGPT Prompts. Share, discover, and collect prompts from the community. Free and open source — self-host for your organization with complete privacy.
- CompVis / stable-diffusion
A latent text-to-image diffusion model
- abhigyanpatwari / GitNexus
GitNexus: The Zero-Server Code Intelligence Engine - GitNexus is a client-side knowledge graph creator that runs entirely in your browser. Drop in a GitHub repo or ZIP file, and get an interactive knowledge graph with a built-in Graph RAG agent. Perfect for code exploration.
- Stremio / stremio-web
Stremio - Freedom to Stream
- stan-smith / FossFLOW
Make beautiful isometric infrastructure diagrams
- VectifyAI / PageIndex
📑 PageIndex: Document Index for Vectorless, Reasoning-based RAG
- cloudflare / agents
Build and deploy AI Agents on Cloudflare
- siteboon / claudecodeui
Use Claude Code, Cursor CLI, or Codex on mobile and web with CloudCLI (aka Claude Code UI). CloudCLI is a free, open-source web UI that helps you manage your Claude Code sessions and projects remotely.
- NevaMind-AI / memU
Memory for 24/7 proactive agents like openclaw (moltbot, clawdbot).
- clash-verge-rev / clash-verge-rev
A modern GUI client based on Tauri, designed to run on Windows, macOS, and Linux for a tailored proxy experience
Hugging Face (15)
- VESPO: Variational Sequence-Level Soft Policy Optimization for Stable Off-Policy LLM Training
Training stability remains a central challenge in reinforcement learning (RL) for large language models (LLMs). Policy staleness, asynchronous training, and mismatches between training and inference engines all cause the behavior policy to diverge from the current policy, risking training collapse. Importance sampling provides a principled correction for this distribution shift but suffers from high variance; existing remedies such as token-level clipping and sequence-level normalization lack a unified theoretical foundation. We propose Variational sEquence-level Soft Policy Optimization (VESPO). By incorporating variance reduction into a variational formulation over proposal distributions, VESPO derives a closed-form reshaping kernel that operates directly on sequence-level importance weights without length normalization. Experiments on mathematical reasoning benchmarks show that VESPO maintains stable training under staleness ratios up to 64x and fully asynchronous execution, and delivers consistent gains across both dense and Mixture-of-Experts models. Code is available at https://github.com/FloyedShen/VESPO
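The distribution-shift correction VESPO builds on can be made concrete. Below is a minimal sketch of sequence-level importance weighting with clipping, the standard high-variance remedy the paper improves on (VESPO's closed-form reshaping kernel is not reproduced here; the function name and all log-probabilities are illustrative, not from the paper):

```python
import math

def seq_importance_weight(logp_current, logp_behavior, clip=5.0):
    """Sequence-level importance weight w = exp(sum log pi - sum log mu),
    clipped to [1/clip, clip] to control variance. This is the standard
    clipping remedy, not VESPO's variational reshaping kernel."""
    log_w = sum(logp_current) - sum(logp_behavior)
    w = math.exp(log_w)
    return min(max(w, 1.0 / clip), clip)

# Per-token log-probs of one sampled sequence under the current (pi)
# and behavior (mu) policies; no length normalization is applied.
logp_pi = [-0.2, -1.1, -0.5]
logp_mu = [-0.3, -1.0, -0.9]
print(seq_importance_weight(logp_pi, logp_mu))  # exp(0.4) ≈ 1.49
```

The weight is computed on the whole sequence rather than per token, which matches the paper's setting; the clip bound is the knob whose variance/bias trade-off VESPO's kernel is designed to replace.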
- Does Your Reasoning Model Implicitly Know When to Stop Thinking?
Recent advancements in large reasoning models (LRMs) have greatly improved their capabilities on complex reasoning tasks through Long Chains of Thought (CoTs). However, this approach often results in substantial redundancy, impairing computational efficiency and causing significant delays in real-time applications. Recent studies show that longer reasoning chains are frequently uncorrelated with correctness and can even be detrimental to accuracy. In a further in-depth analysis of this phenomenon, we surprisingly uncover and empirically verify that LRMs implicitly know the appropriate time to stop thinking, while this capability is obscured by current sampling paradigms. Motivated by this, we introduce SAGE (Self-Aware Guided Efficient Reasoning), a novel sampling paradigm that unleashes this efficient reasoning potential. Furthermore, integrating SAGE as mixed sampling into group-based reinforcement learning (SAGE-RL) enables SAGE-RL to effectively incorporate SAGE-discovered efficient reasoning patterns into standard pass@1 inference, markedly enhancing both the reasoning accuracy and efficiency of LRMs across multiple challenging mathematical benchmarks.
- Generated Reality: Human-centric World Simulation using Interactive Video Generation with Hand and Camera Control
Extended reality (XR) demands generative models that respond to users' tracked real-world motion, yet current video world models accept only coarse control signals such as text or keyboard input, limiting their utility for embodied interaction. We introduce a human-centric video world model that is conditioned on both tracked head pose and joint-level hand poses. For this purpose, we evaluate existing diffusion transformer conditioning strategies and propose an effective mechanism for 3D head and hand control, enabling dexterous hand-object interactions. We train a bidirectional video diffusion model teacher using this strategy and distill it into a causal, interactive system that generates egocentric virtual environments. We evaluate this generated reality system with human subjects and demonstrate improved task performance as well as a significantly higher perceived level of control over the performed actions compared with relevant baselines.
- Spanning the Visual Analogy Space with a Weight Basis of LoRAs
Visual analogy learning enables image manipulation through demonstration rather than textual description, allowing users to specify complex transformations difficult to articulate in words. Given a triplet {a, a', b}, the goal is to generate b' such that a : a' :: b : b'. Recent methods adapt text-to-image models to this task using a single Low-Rank Adaptation (LoRA) module, but they face a fundamental limitation: attempting to capture the diverse space of visual transformations within a fixed adaptation module constrains generalization capabilities. Inspired by recent work showing that LoRAs in constrained domains span meaningful, interpolatable semantic spaces, we propose LoRWeB, a novel approach that specializes the model for each analogy task at inference time through dynamic composition of learned transformation primitives, informally, choosing a point in a "space of LoRAs". We introduce two key components: (1) a learnable basis of LoRA modules, to span the space of different visual transformations, and (2) a lightweight encoder that dynamically selects and weighs these basis LoRAs based on the input analogy pair. Comprehensive evaluations demonstrate our approach achieves state-of-the-art performance and significantly improves generalization to unseen visual transformations. Our findings suggest that LoRA basis decompositions are a promising direction for flexible visual manipulation. Code and data are in https://research.nvidia.com/labs/par/lorweb
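The "space of LoRAs" idea reduces to a weighted sum of low-rank updates, W' = W + Σᵢ cᵢ BᵢAᵢ. Below is a minimal sketch under that reading (the lightweight encoder that predicts the coefficients is omitted; all shapes, names, and numbers are illustrative, not from the paper):

```python
def matmul(A, B):
    """Plain nested-loop matrix product (no dependencies needed at this scale)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def compose_lora_basis(W, basis, coeffs):
    """W' = W + sum_i coeffs[i] * (B_i @ A_i): compose basis LoRAs
    into one task-specific adapter at inference time."""
    d_out, d_in = len(W), len(W[0])
    W_new = [row[:] for row in W]
    for c, (B, A) in zip(coeffs, basis):
        delta = matmul(B, A)  # (d_out x r) @ (r x d_in) low-rank update
        for i in range(d_out):
            for j in range(d_in):
                W_new[i][j] += c * delta[i][j]
    return W_new

W = [[1.0, 0.0], [0.0, 1.0]]
# Two rank-1 basis LoRAs: each is a (B, A) pair with B 2x1 and A 1x2.
basis = [([[1.0], [0.0]], [[0.0, 1.0]]),
         ([[0.0], [1.0]], [[1.0, 0.0]])]
coeffs = [0.5, -0.5]  # in LoRWeB these would be predicted per analogy pair
print(compose_lora_basis(W, basis, coeffs))  # [[1.0, 0.5], [-0.5, 1.0]]
```

The point of the basis is that only the coefficient vector changes per task, so specializing the model at inference is a cheap linear combination rather than a new fine-tune.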
- Decoding as Optimisation on the Probability Simplex: From Top-K to Top-P (Nucleus) to Best-of-K Samplers
Decoding sits between a language model and everything we do with it, yet it is still treated as a heuristic knob-tuning exercise. We argue decoding should be understood as a principled optimisation layer: at each token, we solve a regularised problem over the probability simplex that trades off model score against structural preferences and constraints. This single template recovers greedy decoding, Softmax sampling, Top-K, Top-P, and Sparsemax-style sparsity as special cases, and explains their common structure through optimality conditions. More importantly, the framework makes it easy to invent new decoders without folklore. We demonstrate this by designing Best-of-K (BoK), a KL-anchored coverage objective aimed at multi-sample pipelines (self-consistency, reranking, verifier selection). BoK targets the probability of covering good alternatives within a fixed K-sample budget and improves empirical performance. We show that such samples can improve accuracy by, for example, +18.6% for Qwen2.5-Math-7B on MATH500 at high sampling temperatures.
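For reference, two of the decoders the paper unifies are simple filters over the next-token distribution. A minimal sketch of Top-K and Top-P (nucleus) filtering on a toy distribution (illustrative values; the paper's BoK objective is not reproduced here):

```python
def top_k_filter(probs, k):
    """Keep the k highest-probability tokens, zero the rest, renormalize."""
    keep = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in keep)
    return [probs[i] / total if i in keep else 0.0 for i in range(len(probs))]

def top_p_filter(probs, p):
    """Keep the smallest set of top tokens whose mass reaches p (nucleus),
    zero the rest, renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, mass = set(), 0.0
    for i in order:
        keep.add(i)
        mass += probs[i]
        if mass >= p:
            break
    total = sum(probs[i] for i in keep)
    return [probs[i] / total if i in keep else 0.0 for i in range(len(probs))]

probs = [0.5, 0.25, 0.15, 0.07, 0.03]
print(top_k_filter(probs, 2))   # [0.666..., 0.333..., 0.0, 0.0, 0.0]
print(top_p_filter(probs, 0.8)) # keeps 3 tokens: 0.5 + 0.25 + 0.15 >= 0.8
```

In the paper's framing both are solutions of a regularised optimisation over the simplex; the sketch shows only their shared truncate-and-renormalize structure.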
- EgoPush: Learning End-to-End Egocentric Multi-Object Rearrangement for Mobile Robots
Humans can rearrange objects in cluttered environments using egocentric perception, navigating occlusions without global coordinates. Inspired by this capability, we study long-horizon multi-object non-prehensile rearrangement for mobile robots using a single egocentric camera. We introduce EgoPush, a policy learning framework that enables egocentric, perception-driven rearrangement without relying on explicit global state estimation that often fails in dynamic scenes. EgoPush designs an object-centric latent space to encode relative spatial relations among objects, rather than absolute poses. This design enables a privileged reinforcement-learning (RL) teacher to jointly learn latent states and mobile actions from sparse keypoints, which is then distilled into a purely visual student policy. To reduce the supervision gap between the omniscient teacher and the partially observed student, we restrict the teacher's observations to visually accessible cues. This induces active perception behaviors that are recoverable from the student's viewpoint. To address long-horizon credit assignment, we decompose rearrangement into stage-level subproblems using temporally decayed, stage-local completion rewards. Extensive simulation experiments demonstrate that EgoPush significantly outperforms end-to-end RL baselines in success rate, with ablation studies validating each design choice. We further demonstrate zero-shot sim-to-real transfer on a mobile platform in the real world. Code and videos are available at https://ai4ce.github.io/EgoPush/.
- SARAH: Spatially Aware Real-time Agentic Humans
As embodied agents become central to VR, telepresence, and digital human applications, their motion must go beyond speech-aligned gestures: agents should turn toward users, respond to their movement, and maintain natural gaze. Current methods lack this spatial awareness. We close this gap with the first real-time, fully causal method for spatially-aware conversational motion, deployable on a streaming VR headset. Given a user's position and dyadic audio, our approach produces full-body motion that aligns gestures with speech while orienting the agent according to the user. Our architecture combines a causal transformer-based VAE with interleaved latent tokens for streaming inference and a flow matching model conditioned on user trajectory and audio. To support varying gaze preferences, we introduce a gaze scoring mechanism with classifier-free guidance to decouple learning from control: the model captures natural spatial alignment from data, while users can adjust eye contact intensity at inference time. On the Embody 3D dataset, our method achieves state-of-the-art motion quality at over 300 FPS (3x faster than non-causal baselines), while capturing the subtle spatial dynamics of natural conversation. We validate our approach on a live VR system, bringing spatially-aware conversational agents to real-time deployment. Please see https://evonneng.github.io/sarah/ for details.
- Avey-B
Compact pretrained bidirectional encoders remain the backbone of industrial NLP under tight compute and memory budgets. Their effectiveness stems from self-attention's ability to deliver high-quality bidirectional contextualization with sequence-level parallelism, as popularized by BERT-style architectures. Recently, Avey was introduced as an autoregressive, attention-free alternative that naturally admits an encoder-only adaptation. In this paper, we reformulate Avey for the encoder-only paradigm and propose several innovations to its architecture, including decoupled static and dynamic parameterizations, stability-oriented normalization, and neural compression. Results show that this reformulated architecture compares favorably to four widely used Transformer-based encoders, consistently outperforming them on standard token-classification and information-retrieval benchmarks while scaling more efficiently to long contexts.
- VidEoMT: Your ViT is Secretly Also a Video Segmentation Model
Existing online video segmentation models typically combine a per-frame segmenter with complex specialized tracking modules. While effective, these modules introduce significant architectural complexity and computational overhead. Recent studies suggest that plain Vision Transformer (ViT) encoders, when scaled with sufficient capacity and large-scale pre-training, can conduct accurate image segmentation without requiring specialized modules. Motivated by this observation, we propose the Video Encoder-only Mask Transformer (VidEoMT), a simple encoder-only video segmentation model that eliminates the need for dedicated tracking modules. To enable temporal modeling in an encoder-only ViT, VidEoMT introduces a lightweight query propagation mechanism that carries information across frames by reusing queries from the previous frame. To balance this with adaptability to new content, it employs a query fusion strategy that combines the propagated queries with a set of temporally-agnostic learned queries. As a result, VidEoMT attains the benefits of a tracker without added complexity, achieving competitive accuracy while being 5x-10x faster, running at up to 160 FPS with a ViT-L backbone. Code: https://www.tue-mps.org/videomt/
- DeepVision-103K: A Visually Diverse, Broad-Coverage, and Verifiable Mathematical Dataset for Multimodal Reasoning
Reinforcement Learning with Verifiable Rewards (RLVR) has been shown effective in enhancing the visual reflection and reasoning capabilities of Large Multimodal Models (LMMs). However, existing datasets are predominantly derived from either small-scale manual construction or recombination of prior resources, which limits data diversity and coverage, thereby constraining further gains in model performance. To this end, we introduce DeepVision-103K, a comprehensive dataset for RLVR training that covers diverse K12 mathematical topics, extensive knowledge points, and rich visual elements. Models trained on DeepVision achieve strong performance on multimodal mathematical benchmarks, and generalize effectively to general multimodal reasoning tasks. Further analysis reveals enhanced visual perception, reflection and reasoning capabilities in trained models, validating DeepVision's effectiveness for advancing multimodal reasoning. Data: https://huggingface.co/datasets/skylenage/DeepVision-103K
- Whom to Query for What: Adaptive Group Elicitation via Multi-Turn LLM Interactions
Eliciting information to reduce uncertainty about latent group-level properties from surveys and other collective assessments requires allocating limited questioning effort under real costs and missing data. Although large language models enable adaptive, multi-turn interactions in natural language, most existing elicitation methods optimize what to ask with a fixed respondent pool, and do not adapt respondent selection or leverage population structure when responses are partial or incomplete. To address this gap, we study adaptive group elicitation, a multi-round setting where an agent adaptively selects both questions and respondents under explicit query and participation budgets. We propose a theoretically grounded framework that combines (i) an LLM-based expected information gain objective for scoring candidate questions with (ii) heterogeneous graph neural network propagation that aggregates observed responses and participant attributes to impute missing responses and guide per-round respondent selection. This closed-loop procedure queries a small, informative subset of individuals while inferring population-level responses via structured similarity. Across three real-world opinion datasets, our method consistently improves population-level response prediction under constrained budgets, including a >12% relative gain on CES at a 10% respondent budget.
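The expected-information-gain objective used to score candidate questions has a simple closed form in the Bernoulli case: prior entropy minus expected posterior entropy. A minimal sketch for a yes/no question about a binary latent (illustrative probabilities; the paper scores questions with an LLM rather than closed-form models):

```python
import math

def entropy(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def expected_information_gain(prior, p_yes_given_h1, p_yes_given_h0):
    """EIG = H(prior) - E_answer[H(posterior)] for a yes/no question
    about a binary hypothesis. Assumes 0 < P(yes) < 1 for these inputs."""
    p_yes = prior * p_yes_given_h1 + (1 - prior) * p_yes_given_h0
    post_yes = prior * p_yes_given_h1 / p_yes            # Bayes update on "yes"
    post_no = prior * (1 - p_yes_given_h1) / (1 - p_yes) # Bayes update on "no"
    return entropy(prior) - (p_yes * entropy(post_yes)
                             + (1 - p_yes) * entropy(post_no))

# A discriminative question (answers differ sharply across hypotheses)
# carries more information than an uninformative one.
print(expected_information_gain(0.5, 0.9, 0.1))  # ≈ 0.531 bits
print(expected_information_gain(0.5, 0.5, 0.5))  # 0.0 bits
```

Ranking candidate questions by this score, then picking whom to ask via the graph-propagated response estimates, is the closed loop the abstract describes.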
- Rubrics as an Attack Surface: Stealthy Preference Drift in LLM Judges
Evaluation and alignment pipelines for large language models increasingly rely on LLM-based judges, whose behavior is guided by natural-language rubrics and validated on benchmarks. We identify a previously under-recognized vulnerability in this workflow, which we term Rubric-Induced Preference Drift (RIPD). Even when rubric edits pass benchmark validation, they can still produce systematic and directional shifts in a judge's preferences on target domains. Because rubrics serve as a high-level decision interface, such drift can emerge from seemingly natural, criterion-preserving edits and remain difficult to detect through aggregate benchmark metrics or limited spot-checking. We further show this vulnerability can be exploited through rubric-based preference attacks, in which benchmark-compliant rubric edits steer judgments away from a fixed human or trusted reference on target domains, systematically inducing RIPD and reducing target-domain accuracy by up to 9.5% (helpfulness) and 27.9% (harmlessness). When these judgments are used to generate preference labels for downstream post-training, the induced bias propagates through alignment pipelines and becomes internalized in trained policies. This leads to persistent and systematic drift in model behavior. Overall, our findings highlight evaluation rubrics as a sensitive and manipulable control interface, revealing a system-level alignment risk that extends beyond evaluator reliability alone. The code is available at: https://github.com/ZDCSlab/Rubrics-as-an-Attack-Surface. Warning: Certain sections may contain potentially harmful content that may not be appropriate for all readers.
- ReIn: Conversational Error Recovery with Reasoning Inception
Conversational agents powered by large language models (LLMs) with tool integration achieve strong performance on fixed task-oriented dialogue datasets but remain vulnerable to unanticipated, user-induced errors. Rather than focusing on error prevention, this work focuses on error recovery, which necessitates the accurate diagnosis of erroneous dialogue contexts and execution of proper recovery plans. Under realistic constraints precluding model fine-tuning or prompt modification due to significant cost and time requirements, we explore whether agents can recover from contextually flawed interactions and how their behavior can be adapted without altering model parameters and prompts. To this end, we propose Reasoning Inception (ReIn), a test-time intervention method that plants an initial reasoning into the agent's decision-making process. Specifically, an external inception module identifies predefined errors within the dialogue context and generates recovery plans, which are subsequently integrated into the agent's internal reasoning process to guide corrective actions, without modifying its parameters or system prompts. We evaluate ReIn by systematically simulating conversational failure scenarios that directly hinder successful completion of user goals: user's ambiguous and unsupported requests. Across diverse combinations of agent models and inception modules, ReIn substantially improves task success and generalizes to unseen error types. Moreover, it consistently outperforms explicit prompt-modification approaches, underscoring its utility as an efficient, on-the-fly method. In-depth analysis of its operational mechanism, particularly in relation to instruction hierarchy, indicates that jointly defining recovery tools with ReIn can serve as a safe and effective strategy for improving the resilience of conversational agents without modifying the backbone models or system prompts.
- Adam Improves Muon: Adaptive Moment Estimation with Orthogonalized Momentum
Efficient stochastic optimization typically integrates an update direction that performs well in the deterministic regime with a mechanism adapting to stochastic perturbations. While Adam uses adaptive moment estimates to promote stability, Muon utilizes the weight layers' matrix structure via orthogonalized momentum, showing superior performance in large language model training. We propose a new optimizer and a diagonal extension, NAMO and NAMO-D, providing the first principled integration of orthogonalized momentum with norm-based Adam-type noise adaptation. NAMO scales orthogonalized momentum using a single adaptive stepsize, preserving orthogonality while improving upon Muon at negligible additional cost. NAMO-D instead right-multiplies orthogonalized momentum by a diagonal matrix with clamped entries. This design enables neuron-wise noise adaptation and aligns with the common near block-diagonal Hessian structure. Under standard assumptions, we establish optimal convergence rates for both algorithms in the deterministic setting and show that, in the stochastic setting, their convergence guarantees adapt to the noise level of stochastic gradients. Experiments on pretraining GPT-2 models demonstrate improved performance of both NAMO and NAMO-D compared to the AdamW and Muon baselines, with NAMO-D achieving further gains over NAMO via an additional clamping hyperparameter that balances the competing goals of maintaining a well-conditioned update direction and leveraging fine-grained noise adaptation.
- 4RC: 4D Reconstruction via Conditional Querying Anytime and Anywhere
We present 4RC, a unified feed-forward framework for 4D reconstruction from monocular videos. Unlike existing approaches that typically decouple motion from geometry or produce limited 4D attributes such as sparse trajectories or two-view scene flow, 4RC learns a holistic 4D representation that jointly captures dense scene geometry and motion dynamics. At its core, 4RC introduces a novel encode-once, query-anywhere and anytime paradigm: a transformer backbone encodes the entire video into a compact spatio-temporal latent space, from which a conditional decoder can efficiently query 3D geometry and motion for any query frame at any target timestamp. To facilitate learning, we represent per-view 4D attributes in a minimally factorized form by decomposing them into base geometry and time-dependent relative motion. Extensive experiments demonstrate that 4RC outperforms prior and concurrent methods across a wide range of 4D reconstruction tasks.
Solidot (8)
- Is brain rot real?
Does scrolling through too much dopamine-stimulating social media content cause brain rot? According to multiple studies, it may be real. Research shows that scrolling short videos on platforms such as TikTok, Instagram, or YouTube Shorts affects attention, memory, and mental health. One study found that increased short-video use correlates with cognitive decline and heightened anxiety. According to a study published in Translational Psychiatry, an analysis of more than 7,000 children found that more screen time was associated with thinner cortex in parts of the brain. The cortex is the brain region responsible for higher-order thinking, memory, and decision-making, and it is also critical for controlling addictive behavior. Another study found that removing social media apps from children's phones, without otherwise restricting phone use, significantly reduced the negative effects.
- I2P anonymity network hit by Sybil attack from the Kimwolf botnet
On February 3, the I2P anonymity network suffered a Sybil attack from the Kimwolf IoT botnet. In a Sybil attack, an attacker creates Sybil nodes to take control of a network system and disrupt its normal operation. The decentralized I2P anonymity network normally has only 15,000-20,000 active devices, but on that day as many as 700,000 malicious nodes flooded in, outnumbering legitimate nodes 39 to 1. Kimwolf's main C&C (command-and-control) servers had previously been disrupted by Google and other companies; the botnet's operators said on Discord that they had tried to use the I2P network as backup C&C infrastructure and accidentally broke it. Six days later, the I2P team released v2.11.0, adding mitigations against Sybil attacks and enabling the post-quantum encryption scheme ML-KEM with X25519 by default.
- When AI becomes a means of production: on "technical positioning"
Nala Ginrut writes: When AI becomes productivity infrastructure, do we still retain the ability to migrate and the freedom to choose? If today is the window period, what preparations should we make during it to avoid being forced into a reactive position during the lock-in and contraction periods? This involves a concept I call "technical positioning" (技术格局). It does not mean opposing or rejecting platforms, nor does it emphasize self-sufficiency; rather, it means that for key production tools, individuals retain basic migration ability and room to choose.
- DNA technology and genealogy databases crack a 1982 murder case
DNA technology and genetic genealogy databases have once again helped police solve a cold case. Sarah Geer, a 13-year-old girl from Cloverdale, California, disappeared on the night of May 23, 1982 after leaving a friend's house; a firefighter found her body the next morning. Her death was ruled a homicide, but technical limitations prevented identifying a suspect, and the case sat unsolved for more than 40 years. Using DNA collected from Sarah's body and genetic genealogy databases, the FBI determined that the killer was one of four brothers; investigators placed them under surveillance, collected discarded cigarettes, and identified 64-year-old James Unick as the killer. Nearly 44 years after Sarah's death, a jury convicted him of murder on February 13. The local prosecutor's office said in a statement that although 44 years was far too long to wait, justice was finally served.
- Microsoft gaming executives depart; successor comes from the AI division
Phil Spencer, CEO of Microsoft's Xbox and gaming business, is leaving after 38 years at the company. Xbox president Sarah Bond, widely seen as his successor, has also resigned, and the new CEO of the gaming business will be Asha Sharma, who leads the CoreAI product group. Spencer was appointed head of Xbox in March 2014. During his tenure he launched the Xbox Game Pass subscription service and is best known for the $69 billion acquisition of Activision Blizzard. He also acquired a series of game studios, including Bethesda parent company ZeniMax for $7.5 billion in 2020, gaining full control of famous game IPs such as Fallout and The Elder Scrolls (Bethesda) and Doom and Quake (id Software).
- NASA plans to fly the Artemis II crewed lunar mission on March 6
NASA plans to fly the Artemis II crewed lunar flyby mission on March 6. The Space Launch System (SLS) rocket for the mission is already standing on the launch pad at Kennedy Space Center in Florida. NASA officials will conduct a multi-day flight readiness review next week to ensure every aspect of the rocket is ready. Earlier this month, the SLS encountered a liquid-hydrogen leak during its first fuel-loading test; officials said the problem appears to have been resolved after some seals were replaced.
- Wikipedia blocks Archive.today
After the archiving site Archive.today embedded a script in its CAPTCHA verification page to launch a DDoS attack on Gyrovague, the personal blog of blogger Jani Patokallio, Wikipedia formally decided to blacklist Archive.today and its related domains archive.is, archive.ph, archive.fo, archive.li, archive.md, and archive.vn, and began removing Archive.today links from Wikipedia articles. Archive.today is widely used to bypass paywalls; some 400,000 Wikipedia articles contain more than 695,000 Archive.today links, which will be replaced with archives from the Internet Archive, Ghostarchive, or Megalodon, or with links to the original articles. Patokallio, a Finn working in Australia, was attacked via a script that Archive.today's operator embedded in the CAPTCHA page served to Finnish IPs; the script is still present, and the DDoS attack on Patokallio's blog is still ongoing.
- Google pays lip service to keeping Android open
Last August, Google announced that starting in September 2026 it would enforce a developer identity verification policy, prohibiting the installation of apps from unverified developers on Android devices. The move triggered strong community backlash, and Google subsequently softened its stance, announcing that it would continue to allow installing apps from unverified developers. It said it was building an advanced flow that would let experienced users accept for themselves the risk of installing software from unverified developers; the flow would include clear warnings to ensure users fully understand the risks, while the final choice would remain in users' hands. But over the past few months there has been no sign of Google building that flow, while the developer verification policy continues to advance. F-Droid, the Android FOSS app store, has warned that Google is paying lip service and that the earlier softening was merely a PR move.