OrangeBot.AI Digest — 2026-02-20

56 headlines across 4 sources, aggregated for the day.

Hacker News (15)

  1. Turn Dependabot Off (words.filippo.io)
  2. Wikipedia deprecates Archive.today, starts removing archive links (arstechnica.com)
  3. I found a Vulnerability. They found a Lawyer (dixken.de)
  4. Facebook is cooked (pilk.website)
  5. Keep Android Open (f-droid.org)
  6. Trump's global tariffs struck down by US Supreme Court (www.bbc.com)
  7. Child's Play: Tech's new generation and the end of thinking (harpers.org)
  8. I found a useful Git one liner buried in leaked CIA developer docs (spencer.wtf)
  9. Ggml.ai joins Hugging Face to ensure the long-term progress of Local AI (github.com)
  10. PayPal discloses data breach that exposed user info for 6 months (www.bleepingcomputer.com)
  11. Nvidia and OpenAI abandon unfinished $100B deal in favour of $30B investment (www.ft.com)
  12. The path to ubiquitous AI (17k tokens/sec) (taalas.com)
  13. Untapped Way to Learn a Codebase: Build a Visualizer (jimmyhmiller.com)
  14. I tried building my startup entirely on European infrastructure (www.coinerella.com)
  15. FreeCAD (www.freecad.org)

GitHub Trending (15)

  1. vxcontrol / pentagi

    ✨ Fully autonomous AI Agents system capable of performing complex penetration testing tasks

  2. blackboardsh / electrobun

    Build ultra fast, tiny, and cross-platform desktop apps with Typescript.

  3. HailToDodongo / pyrite64

    N64 Game-Engine and Editor using libdragon & tiny3d

  4. obra / superpowers

    An agentic skills framework & software development methodology that works.

  5. aquasecurity / trivy

    Find vulnerabilities, misconfigurations, secrets, SBOM in containers, Kubernetes, code repositories, clouds and more

  6. PostHog / posthog

    🦔 PostHog is an all-in-one developer platform for building successful products. We offer product analytics, web analytics, session replay, error tracking, feature flags, experimentation, surveys, data warehouse, a CDP, and an AI product assistant to help debug your code, ship features faster, and keep all your usage and customer data in one stack.

  7. eslint / eslint

    Find and fix problems in your JavaScript code.

  8. anthropics / claude-plugins-official

    Official, Anthropic-managed directory of high quality Claude Code Plugins.

  9. Effect-TS / effect-smol

    Core libraries and experimental work for Effect v4

  10. google-research / timesfm

    TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.

  11. roboflow / trackers

    Trackers gives you clean, modular re-implementations of leading multi-object tracking algorithms released under the permissive Apache 2.0 license. You combine them with any detection model you already use.

  12. huggingface / skills
  13. databricks-solutions / ai-dev-kit

    Databricks Toolkit for Coding Agents provided by Field Engineering

  14. freemocap / freemocap

    Free Motion Capture for Everyone 💀✨

  15. ComposioHQ / composio

    Composio powers 1000+ toolkits, tool search, context management, authentication, and a sandboxed workbench to help you build AI agents that turn intent into action.

Hugging Face (15)

  1. SpargeAttention2: Trainable Sparse Attention via Hybrid Top-k+Top-p Masking and Distillation Fine-Tuning

    Many training-free sparse attention methods are effective for accelerating diffusion models. Recently, several works suggest that making sparse attention trainable can further increase sparsity while preserving generation quality. We study three key questions: (1) when do the two common masking rules, i.e., Top-k and Top-p, fail, and how can we avoid these failures? (2) why can trainable sparse attention reach higher sparsity than training-free methods? (3) what are the limitations of fine-tuning sparse attention using the diffusion loss, and how can we address them? Based on this analysis, we propose SpargeAttention2, a trainable sparse attention method that achieves high sparsity without degrading generation quality. SpargeAttention2 includes (i) a hybrid masking rule that combines Top-k and Top-p for more robust masking at high sparsity, (ii) an efficient trainable sparse attention implementation, and (iii) a distillation-inspired fine-tuning objective to better preserve generation quality during fine-tuning using sparse attention. Experiments on video diffusion models show that SpargeAttention2 reaches 95% attention sparsity and a 16.2x attention speedup while maintaining generation quality, consistently outperforming prior sparse attention methods.
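
    The hybrid masking rule is not specified in detail in the abstract; as a toy illustration, the sketch below (NumPy, hypothetical: it assumes the hybrid rule keeps the union of the Top-k and Top-p selections per attention row) shows how the two rules can be combined so that neither very peaked nor very flat score rows are masked too aggressively.

```python
import numpy as np

def topk_mask(scores, k):
    # Keep the k largest entries per row.
    idx = np.argsort(scores, axis=-1)[:, -k:]
    mask = np.zeros_like(scores, dtype=bool)
    np.put_along_axis(mask, idx, True, axis=-1)
    return mask

def topp_mask(scores, p):
    # Keep the smallest prefix of softmax mass reaching p per row.
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    order = np.argsort(-probs, axis=-1)                 # descending
    sorted_p = np.take_along_axis(probs, order, axis=-1)
    keep_sorted = np.cumsum(sorted_p, axis=-1) - sorted_p < p
    mask = np.zeros_like(scores, dtype=bool)
    np.put_along_axis(mask, order, keep_sorted, axis=-1)
    return mask

def hybrid_mask(scores, k, p):
    # Hypothetical combination: an entry survives if either rule selects it.
    return topk_mask(scores, k) | topp_mask(scores, p)

scores = np.random.randn(4, 16)
m = hybrid_mask(scores, k=4, p=0.9)
```

    Top-k alone under-selects on flat score rows once k is small, while Top-p alone over-selects on them; taking the union guarantees a floor of k kept entries per row while still covering the requested probability mass.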

  2. Unified Latents (UL): How to train your latents

    We present Unified Latents (UL), a framework for learning latent representations that are jointly regularized by a diffusion prior and decoded by a diffusion model. By linking the encoder's output noise to the prior's minimum noise level, we obtain a simple training objective that provides a tight upper bound on the latent bitrate. On ImageNet-512, our approach achieves competitive FID of 1.4, with high reconstruction quality (PSNR) while requiring fewer training FLOPs than models trained on Stable Diffusion latents. On Kinetics-600, we set a new state-of-the-art FVD of 1.3.

  3. Mobile-Agent-v3.5: Multi-platform Fundamental GUI Agents

    The paper introduces GUI-Owl-1.5, the latest native GUI agent model that features instruct/thinking variants in multiple sizes (2B/4B/8B/32B/235B) and supports a range of platforms (desktop, mobile, browser, and more) to enable cloud-edge collaboration and real-time interaction. GUI-Owl-1.5 achieves state-of-the-art results among open-source models on more than 20 GUI benchmarks: (1) on GUI automation tasks, it obtains 56.5 on OSWorld, 71.6 on AndroidWorld, and 48.4 on WebArena; (2) on grounding tasks, it obtains 80.3 on ScreenSpotPro; (3) on tool-calling tasks, it obtains 47.6 on OSWorld-MCP, and 46.8 on MobileWorld; (4) on memory and knowledge tasks, it obtains 75.5 on GUI-Knowledge Bench. GUI-Owl-1.5 incorporates several key innovations: (1) Hybrid Data Flywheel: we construct the data pipeline for UI understanding and trajectory generation based on a combination of simulated environments and cloud-based sandbox environments, in order to improve the efficiency and quality of data collection; (2) Unified Enhancement of Agent Capabilities: we use a unified thought-synthesis pipeline to enhance the model's reasoning capabilities, while placing particular emphasis on improving key agent abilities, including Tool/MCP use, memory, and multi-agent adaptation; (3) Multi-platform Environment RL Scaling: we propose a new environment RL algorithm, MRPO, to address the challenges of multi-platform conflicts and the low training efficiency of long-horizon tasks. The GUI-Owl-1.5 models are open-sourced, and an online cloud-sandbox demo is available at https://github.com/X-PLUG/MobileAgent.

  4. "What Are You Doing?": Effects of Intermediate Feedback from Agentic LLM In-Car Assistants During Multi-Step Processing

    Agentic AI assistants that autonomously perform multi-step tasks raise open questions for user experience: how should such systems communicate progress and reasoning during extended operations, especially in attention-critical contexts such as driving? We investigate feedback timing and verbosity from agentic LLM-based in-car assistants through a controlled, mixed-methods study (N=45) comparing planned steps and intermediate results feedback against silent operation with final-only response. Using a dual-task paradigm with an in-car voice assistant, we found that intermediate feedback significantly improved perceived speed, trust, and user experience while reducing task load - effects that held across varying task complexities and interaction contexts. Interviews further revealed user preferences for an adaptive approach: high initial transparency to establish trust, followed by progressively reducing verbosity as systems prove reliable, with adjustments based on task stakes and situational context. We translate our empirical findings into design implications for feedback timing and verbosity in agentic assistants, balancing transparency and efficiency.

  5. Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents

    LLMs are increasingly being used for complex problems which are not necessarily resolved in a single response, but require interacting with an environment to acquire information. In these scenarios, LLMs must reason about inherent cost-uncertainty tradeoffs in when to stop exploring and commit to an answer. For instance, on a programming task, an LLM should test a generated code snippet if it is uncertain about the correctness of that code; the cost of writing a test is nonzero, but typically lower than the cost of making a mistake. In this work, we show that we can induce LLMs to explicitly reason about balancing these cost-uncertainty tradeoffs, then perform more optimal environment exploration. We formalize multiple tasks, including information retrieval and coding, as sequential decision-making problems under uncertainty. Each problem has latent environment state that can be reasoned about via a prior which is passed to the LLM agent. We introduce a framework called Calibrate-Then-Act (CTA), where we feed the LLM this additional context to enable it to act more optimally. This improvement is preserved even under RL training of both the baseline and CTA. Our results on information-seeking QA and on a simplified coding task show that making cost-benefit tradeoffs explicit with CTA can help agents discover more optimal decision-making strategies.
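
    The cost-uncertainty tradeoff in the coding example can be made concrete with a one-line expected-cost rule (a hedged sketch, not the paper's CTA framework; the function name and cost values are illustrative):

```python
def should_test(p_correct, cost_test, cost_mistake):
    # Test the snippet when the test's cost is below the expected
    # cost of committing while still uncertain about correctness.
    expected_loss_if_commit = (1.0 - p_correct) * cost_mistake
    return cost_test < expected_loss_if_commit

# At 70% confidence, a cheap test beats risking an expensive mistake.
assert should_test(p_correct=0.7, cost_test=1.0, cost_mistake=10.0)
# At 99% confidence, the same test is no longer worth its cost.
assert not should_test(p_correct=0.99, cost_test=1.0, cost_mistake=10.0)
```

    CTA's contribution, as summarized above, is getting the LLM to reason about exactly this kind of tradeoff explicitly, with a calibrated prior over the latent environment state supplied in context.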

  6. Arcee Trinity Large Technical Report

    We present the technical report for Arcee Trinity Large, a sparse Mixture-of-Experts model with 400B total parameters and 13B activated per token. Additionally, we report on Trinity Nano and Trinity Mini: Trinity Nano has 6B total parameters with 1B activated per token, and Trinity Mini has 26B total parameters with 3B activated per token. The models' modern architecture includes interleaved local and global attention, gated attention, depth-scaled sandwich norm, and sigmoid routing for Mixture-of-Experts. For Trinity Large, we also introduce a new MoE load balancing strategy titled Soft-clamped Momentum Expert Bias Updates (SMEBU). We train the models using the Muon optimizer. All three models completed training with zero loss spikes. Trinity Nano and Trinity Mini were pre-trained on 10 trillion tokens, and Trinity Large was pre-trained on 17 trillion tokens. The model checkpoints are available at https://huggingface.co/arcee-ai.

  7. TactAlign: Human-to-Robot Policy Transfer via Tactile Alignment

    Human demonstrations collected by wearable devices (e.g., tactile gloves) provide fast and dexterous supervision for policy learning, and are guided by rich, natural tactile feedback. However, a key challenge is how to transfer human-collected tactile signals to robots despite the differences in sensing modalities and embodiment. Existing human-to-robot (H2R) approaches that incorporate touch often assume identical tactile sensors, require paired data, and involve little to no embodiment gap between the human demonstrator and the robot, limiting scalability and generality. We propose TactAlign, a cross-embodiment tactile alignment method that transfers human-collected tactile signals to a robot with different embodiment. TactAlign transforms human and robot tactile observations into a shared latent representation using a rectified flow, without paired datasets, manual labels, or privileged information. Our method enables low-cost latent transport guided by hand-object interaction-derived pseudo-pairs. We demonstrate that TactAlign improves H2R policy transfer across multiple contact-rich tasks (pivoting, insertion, lid closing), generalizes to unseen objects and tasks from less than 5 minutes of human data, and enables zero-shot H2R transfer on a highly dexterous task (light bulb screwing).

  8. DDiT: Dynamic Patch Scheduling for Efficient Diffusion Transformers

    Diffusion Transformers (DiTs) have achieved state-of-the-art performance in image and video generation, but their success comes at the cost of heavy computation. This inefficiency is largely due to the fixed tokenization process, which uses constant-sized patches throughout the entire denoising phase, regardless of the content's complexity. We propose dynamic tokenization, an efficient test-time strategy that varies patch sizes based on content complexity and the denoising timestep. Our key insight is that early timesteps only require coarser patches to model global structure, while later iterations demand finer (smaller-sized) patches to refine local details. During inference, our method dynamically reallocates patch sizes across denoising steps for image and video generation and substantially reduces cost while preserving perceptual generation quality. Extensive experiments demonstrate the effectiveness of our approach: it achieves up to 3.52× and 3.2× speedups on FLUX-1.Dev and Wan 2.1, respectively, without compromising the generation quality and prompt adherence.
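
    The coarse-to-fine intuition can be pictured with a small schedule function (hypothetical sizes and switch points; the paper's actual allocation is content-dependent as well as timestep-dependent):

```python
def patch_size_schedule(step, total_steps, sizes=(8, 4, 2)):
    # Coarse patches early (global structure), finer patches later
    # (local detail). The sizes and cut points are illustrative.
    frac = step / max(total_steps - 1, 1)
    idx = min(int(frac * len(sizes)), len(sizes) - 1)
    return sizes[idx]

# Token count grows as patches shrink: on a 64x64 latent, patch 8
# gives 64 tokens, patch 4 gives 256, patch 2 gives 1024, so the
# early coarse steps are where most of the savings come from.
schedule = [patch_size_schedule(s, 30) for s in range(30)]
```

    Since attention cost is quadratic in token count, running most early steps at the coarse setting is what buys the multi-× speedups reported above.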

  9. Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report v1.5

    To understand and identify the unprecedented risks posed by rapidly advancing artificial intelligence (AI) models, Frontier AI Risk Management Framework in Practice presents a comprehensive assessment of their frontier risks. As the general capabilities of Large Language Models (LLMs) rapidly evolve and agentic AI proliferates, this version of the risk analysis technical report presents an updated and granular assessment of five critical dimensions: cyber offense, persuasion and manipulation, strategic deception, uncontrolled AI R&D, and self-replication. Specifically, we introduce more complex scenarios for cyber offense. For persuasion and manipulation, we evaluate the risk of LLM-to-LLM persuasion on newly released LLMs. For strategic deception and scheming, we add a new experiment on emergent misalignment. For uncontrolled AI R&D, we focus on the "mis-evolution" of agents as they autonomously expand their memory substrates and toolsets. We also monitor and evaluate the safety performance of OpenClaw during interaction on Moltbook. For self-replication, we introduce a new resource-constrained scenario. More importantly, we propose and validate a series of robust mitigation strategies to address these emerging threats, providing a preliminary technical and actionable pathway for the secure deployment of frontier AI. This work reflects our current understanding of AI frontier risks and urges collective action to mitigate these challenges.

  10. ArXiv-to-Model: A Practical Study of Scientific LM Training

    While frontier large language models demonstrate strong reasoning and mathematical capabilities, the practical process of training domain-specialized scientific language models from raw sources remains under-documented. In this work, we present a detailed case study of training a 1.36B-parameter scientific language model directly from raw arXiv LaTeX sources spanning mathematics, computer science, and theoretical physics. We describe an end-to-end pipeline covering metadata filtering, archive validation, LaTeX extraction, text normalization, domain-aware tokenization, and dense transformer training under constrained compute (2xA100 GPUs). Through 24 experimental runs, we analyze training stability, scaling behavior, data yield losses, and infrastructure bottlenecks. Our findings highlight how preprocessing decisions significantly affect usable token volume, how tokenization impacts symbolic stability, and how storage and I/O constraints can rival compute as limiting factors. We further analyze convergence dynamics and show stable training behavior in a data-rich regime (52B pretraining tokens). Rather than proposing a novel architecture, this work provides an engineering-grounded, transparent account of training a small scientific language model from scratch. We hope these insights support researchers operating under moderate compute budgets who seek to build domain-specialized models.

  11. Discovering Multiagent Learning Algorithms with Large Language Models

    Much of the advancement of Multi-Agent Reinforcement Learning (MARL) in imperfect-information games has historically depended on manual iterative refinement of baselines. While foundational families like Counterfactual Regret Minimization (CFR) and Policy Space Response Oracles (PSRO) rest on solid theoretical ground, the design of their most effective variants often relies on human intuition to navigate a vast algorithmic design space. In this work, we propose the use of AlphaEvolve, an evolutionary coding agent powered by large language models, to automatically discover new multiagent learning algorithms. We demonstrate the generality of this framework by evolving novel variants for two distinct paradigms of game-theoretic learning. First, in the domain of iterative regret minimization, we evolve the logic governing regret accumulation and policy derivation, discovering a new algorithm, Volatility-Adaptive Discounted (VAD-)CFR. VAD-CFR employs novel, non-intuitive mechanisms-including volatility-sensitive discounting, consistency-enforced optimism, and a hard warm-start policy accumulation schedule-to outperform state-of-the-art baselines like Discounted Predictive CFR+. Second, in the regime of population based training algorithms, we evolve training-time and evaluation-time meta strategy solvers for PSRO, discovering a new variant, Smoothed Hybrid Optimistic Regret (SHOR-)PSRO. SHOR-PSRO introduces a hybrid meta-solver that linearly blends Optimistic Regret Matching with a smoothed, temperature-controlled distribution over best pure strategies. By dynamically annealing this blending factor and diversity bonuses during training, the algorithm automates the transition from population diversity to rigorous equilibrium finding, yielding superior empirical convergence compared to standard static meta-solvers.

  12. On the Mechanism and Dynamics of Modular Addition: Fourier Features, Lottery Ticket, and Grokking

    We present a comprehensive analysis of how two-layer neural networks learn features to solve the modular addition task. Our work provides a full mechanistic interpretation of the learned model and a theoretical explanation of its training dynamics. While prior work has identified that individual neurons learn single-frequency Fourier features and phase alignment, it does not fully explain how these features combine into a global solution. We bridge this gap by formalizing a diversification condition that emerges during training when overparametrized, consisting of two parts: phase symmetry and frequency diversification. We prove that these properties allow the network to collectively approximate a flawed indicator function on the correct logic for the modular addition task. While individual neurons produce noisy signals, the phase symmetry enables a majority-voting scheme that cancels out noise, allowing the network to robustly identify the correct sum. Furthermore, we explain the emergence of these features under random initialization via a lottery ticket mechanism. Our gradient flow analysis proves that frequencies compete within each neuron, with the "winner" determined by its initial spectral magnitude and phase alignment. From a technical standpoint, we provide a rigorous characterization of the layer-wise phase coupling dynamics and formalize the competitive landscape using the ODE comparison lemma. Finally, we use these insights to demystify grokking, characterizing it as a three-stage process involving memorization followed by two generalization phases, driven by the competition between loss minimization and weight decay.
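
    The single-frequency Fourier mechanism the abstract builds on can be checked directly: a (cos, sin) pair at one frequency composes rotations, so the features of the sum fall out of products of the inputs' features, and a matched-filter readout recovers the modular sum (a minimal worked example of the mechanism, not the paper's trained network):

```python
import numpy as np

P, k = 23, 5          # modulus and a single Fourier frequency, gcd(k, P) = 1
a, b = 7, 19
theta = lambda x: 2 * np.pi * k * x / P

# Encode each input as (cos, sin) at frequency k; the angle-addition
# formulas recover the features of the (unreduced) sum a + b.
c_sum = np.cos(theta(a)) * np.cos(theta(b)) - np.sin(theta(a)) * np.sin(theta(b))
s_sum = np.sin(theta(a)) * np.cos(theta(b)) + np.cos(theta(a)) * np.sin(theta(b))

# Readout: logits[c] = cos(theta(a + b - c)), maximized exactly at
# c = (a + b) mod P because k is coprime to P.
logits = [c_sum * np.cos(theta(c)) + s_sum * np.sin(theta(c)) for c in range(P)]
assert int(np.argmax(logits)) == (a + b) % P  # 26 mod 23 = 3
```

    The paper's phase-symmetry argument is about how many such noisy single-frequency neurons vote together so that this readout stays correct despite each neuron's imperfect signal.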

  13. FRAPPE: Infusing World Modeling into Generalist Policies via Multiple Future Representation Alignment

    Enabling VLA models to predict environmental dynamics, known as world modeling, has been recognized as essential for improving robotic reasoning and generalization. However, current approaches face two main issues: (1) the training objective forces models to over-emphasize pixel-level reconstruction, which constrains semantic learning and generalization; (2) reliance on predicted future observations during inference often leads to error accumulation. To address these challenges, we introduce Future Representation Alignment via Parallel Progressive Expansion (FRAPPE). Our method adopts a two-stage fine-tuning strategy: in the mid-training phase, the model learns to predict the latent representations of future observations; in the post-training phase, we expand the computational workload in parallel and align the representation simultaneously with multiple different visual foundation models. By significantly improving fine-tuning efficiency and reducing dependence on action-annotated data, FRAPPE provides a scalable and data-efficient pathway to enhance world-awareness in generalist robotic policies. Experiments on the RoboTwin benchmark and real-world tasks demonstrate that FRAPPE outperforms state-of-the-art approaches and shows strong generalization in long-horizon and unseen scenarios.
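
    The "align simultaneously with multiple visual foundation models" step could take the shape of an averaged cosine objective (a hypothetical form for illustration; FRAPPE's actual loss and feature extractors are not specified in the abstract):

```python
import numpy as np

def alignment_loss(pred, targets):
    # Average (1 - cosine similarity) between one predicted future
    # latent and feature vectors from several foundation models.
    losses = []
    for t in targets:
        cos = pred @ t / (np.linalg.norm(pred) * np.linalg.norm(t) + 1e-8)
        losses.append(1.0 - cos)
    return float(np.mean(losses))

v = np.array([1.0, 0.0])  # toy predicted latent
w = np.array([0.0, 1.0])  # toy target feature
```

    Aligning against several targets at once is what distinguishes this from single-teacher distillation: no one foundation model's feature space dominates the learned representation.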

  14. Computer-Using World Model

    Agents operating in complex software environments benefit from reasoning about the consequences of their actions, as even a single incorrect user interface (UI) operation can derail long, artifact-preserving workflows. This challenge is particularly acute for computer-using scenarios, where real execution does not support counterfactual exploration, making large-scale trial-and-error learning and planning impractical despite the environment being fully digital and deterministic. We introduce the Computer-Using World Model (CUWM), a world model for desktop software that predicts the next UI state given the current state and a candidate action. CUWM adopts a two-stage factorization of UI dynamics: it first predicts a textual description of agent-relevant state changes, and then realizes these changes visually to synthesize the next screenshot. CUWM is trained on offline UI transitions collected from agents interacting with real Microsoft Office applications, and further refined with a lightweight reinforcement learning stage that aligns textual transition predictions with the structural requirements of computer-using environments. We evaluate CUWM via test-time action search, where a frozen agent uses the world model to simulate and compare candidate actions before execution. Across a range of Office tasks, world-model-guided test-time scaling improves decision quality and execution robustness.

  15. CrispEdit: Low-Curvature Projections for Scalable Non-Destructive LLM Editing

    A central challenge in large language model (LLM) editing is capability preservation: methods that successfully change targeted behavior can quietly game the editing proxy and corrupt general capabilities, producing degenerate behaviors reminiscent of proxy/reward hacking. We present CrispEdit, a scalable and principled second-order editing algorithm that treats capability preservation as an explicit constraint, unifying and generalizing several existing editing approaches. CrispEdit formulates editing as constrained optimization and enforces the constraint by projecting edit updates onto the low-curvature subspace of the capability-loss landscape. At the crux of CrispEdit is expressing the capability constraint via a Bregman divergence, whose quadratic form yields the Gauss-Newton Hessian exactly, even when the base model is not trained to convergence. We make this second-order procedure efficient at the LLM scale using Kronecker-factored approximate curvature (K-FAC) and a novel matrix-free projector that exploits Kronecker structure to avoid constructing massive projection matrices. Across standard model-editing benchmarks, CrispEdit achieves high edit success while keeping capability degradation below 1% on average across datasets, significantly improving over prior editors.
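
    The core projection step can be sketched with a dense eigendecomposition (a toy version; CrispEdit itself uses K-FAC and a matrix-free projector precisely to avoid materializing anything like this at LLM scale):

```python
import numpy as np

def project_low_curvature(delta, hessian, keep_frac=0.5):
    # Keep only the components of the edit update that lie in the
    # low-curvature subspace of the capability-loss Hessian, so the
    # edit moves the model where capabilities are least sensitive.
    eigvals, eigvecs = np.linalg.eigh(hessian)   # ascending eigenvalues
    n_keep = int(len(eigvals) * keep_frac)
    v_low = eigvecs[:, :n_keep]                  # low-curvature directions
    return v_low @ (v_low.T @ delta)

H = np.diag([0.1, 0.2, 10.0, 20.0])  # toy capability-loss curvature
out = project_low_curvature(np.ones(4), H)
```

    In this toy case the update components along the stiff directions (curvature 10 and 20) are zeroed out while the flat directions pass through unchanged.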

Solidot (11)

  1. Trump to Order the Release of Alien and UFO Files

    US President Donald Trump said he will order agencies, including the Department of Defense, to release files related to aliens and UFOs. The move is widely seen as an attempt to divert public attention from the Epstein files. Trump said he does not know whether aliens are real. Last week, former US President Barack Obama said in an interview with Brian Tyler Cohen that he believes aliens are real, but that he has never seen one, that no aliens are held at Area 51, and that Area 51 has no secret underground facility, unless there really is some vast conspiracy whose participants have hidden the existence of aliens even from the US president.

  2. FBI Informant Helped Run the Darknet Drug Market Incognito

    At the sentencing hearing of Lin Rui-Siang, an administrator of the darknet drug market Incognito, his defense lawyers disclosed for the first time that another Incognito administrator was an FBI informant, and that this administrator handled most of Incognito's drug transactions, while Lin insists he was mainly responsible for the code and the site's technical infrastructure. In a prison interview, Lin said the unnamed FBI informant was a full partner in Incognito who handled most of the site's administration, including resolving disputes and deciding which vendors were allowed to sell drugs and which would be removed. Lin claims it was the FBI that was running the drug market. Prosecutors deny that the informant was Lin's equal, saying the informant was his subordinate rather than a partner, and deny that the government operated the site.

  3. Mozilla Advises Windows 7/8/8.1 Users to Switch to Linux

    Mozilla says Firefox 115 ESR, released in July 2023, is the last Firefox version to support Windows 7/8/8.1. It will ship the final security update for Firefox 115 ESR at the end of February 2026 and then end support. Mozilla advises Windows 7/8/8.1 users to upgrade to Windows 10 or later; if their PCs cannot upgrade to Windows 10 or 11 because of restrictions set by Microsoft, Mozilla suggests switching to a Linux distribution.

  4. The Cancer Genomes of Domestic Cats

    Cancer is a common cause of death in domestic cats, yet feline cancer genomes remain poorly understood. According to a study published this week in Science, scientists sequenced the cancer genomes of 493 samples spanning 13 types of feline cancers, along with matched healthy control tissue. Domestic cats not only share living environments with humans but often develop the same non-cancer comorbidities as their owners (such as diabetes), making them an important but underused resource for tumor research. Comparing nearly 1,000 human cancer genes with their feline counterparts, the researchers found oncogenes with similar prevalence in both species, such as TP53. They also identified cancer driver genes, tumor susceptibility genes, and evidence of certain viral sequences in feline cancer genomes.

  5. Fedora Project Lifts IP Block on Syria

    The Fedora Infrastructure Team last week lifted the block on Syrian IP address ranges, allowing Syrian users to download Fedora Linux images and restoring their access to the Fedora RPM repositories, the Fedora Account System, and the Fedora build system. Syrian users' access to the Fedora Project is no longer restricted. The Trump administration lifted sanctions on Syria last year, and the US Department of Commerce subsequently relaxed its export control policy toward Syria; Fedora's move is a response to that policy change.

  6. US Stops Funding Internet Freedom Programs

    Over the past decade the US put more than $1 billion into "Internet Freedom" programs, but after Trump took office the grants were cut off almost immediately. As part of mass government layoffs, senior program staff resigned or were fired in 2025, and many sub-projects were permanently canceled. The Open Technology Fund (OTF), a nonprofit managing half of the programs' funding, won its lawsuit against the government last December, but the Trump administration is appealing the ruling. The administration also withdrew in January from the Freedom Online Coalition, an alliance defending digital rights. Popular tools funded through the Internet Freedom programs include the encrypted messaging service Signal and the Tor Browser, along with other censorship-circumvention tools and technologies.

  7. 99% of Adults Over 40 Have a Rotator Cuff Abnormality

    According to a study published in JAMA Internal Medicine, MRI imaging shows that 99% of adults over 40 have at least one rotator cuff abnormality. The researchers argue that when such a high proportion of people share the same finding, it should be regarded not as an abnormality but as a normal condition requiring no treatment. The rotator cuff is the group of muscles and tendons that stabilizes the shoulder and allows the joint its wide range of motion. 602 participants aged 41 to 76 completed the study; 82% reported no shoulder symptoms and 18% reported symptoms. MRI imaging showed that 595 participants (99%) had at least one rotator cuff abnormality. The most common abnormality was a partial tear (62%), followed by tendinopathy (25%) and full-thickness tears (11%), with similar rates in men and women. No full-thickness tears were found in participants under 45, and their prevalence was highest among those aged 70 to 76.

  8. Google's New Pixel 10a Is Basically Last Year's Pixel 9a

    Google has launched a new mid-range phone, the Pixel 10a, whose specs and $500 price are essentially the same as last year's Pixel 9a. The most notable change is that the camera no longer protrudes, so the phone can slide around on a table. The Pixel 10a's display resolution matches the Pixel 9a's, with slight improvements to the cover glass and peak brightness. The processor is still the Tensor G4 rather than the newer Tensor G5 SoC used in the rest of the Pixel 10 lineup, and the camera hardware, memory, and storage are identical to the Pixel 9a's. Battery life is slightly improved, but neither Pixelsnap Qi2 wireless charging nor the advanced Gemini AI features are offered.

  9. Gabon Blocks All Social Media

    Gabon's media regulator, the High Authority for Communication (HAC), has announced a block on all social media until further notice, citing the spread of "false information", "cyberbullying", and "unauthorized disclosure of personal data", and saying online content has fueled conflict and deepened divisions. General Brice Oligui Nguema, who staged a military coup in 2023 and won the presidential election last year, faces growing social unrest, with teachers and other civil servants striking over pay and working conditions. Monitoring by NetBlocks shows that the major social media platforms WhatsApp, Facebook, and TikTok have all been blocked.

  10. A 14-Year-Old's Origami Structure Bears 10,000 Times Its Own Weight

    A Miura-fold (Miura-ori) origami structure designed by 14-year-old Miles Wu can bear 10,000 times its own weight, winning him the $25,000 top prize at the Thermo Fisher Scientific Junior Innovators Challenge. He is currently a 9th grader at Hunter College High School in New York. Wu has been fascinated with the Japanese art of origami since taking it up as a hobby six years ago, and in 2024 he began exploring uses for origami beyond art. He is especially interested in the Miura fold, a technique invented by Japanese scholar Koryo Miura: pulling on two diagonal corners instantly expands or collapses the folded sheet, offering space savings, stress distribution, and no need for repeated refolding. Combining origami with engineering, the fold can be applied to solar panel deployment, map folding, and architectural structures, and has already been used to deploy solar panels on spacecraft and satellites. He was studying the Miura fold just as Hurricane Helene struck Florida and wildfires raged in Southern California, which led him to consider using it to build sturdy, cheap, easy-to-assemble emergency tents, since existing emergency tents struggle to achieve all three. He tested 54 Miura-fold structures; the strongest could bear more than 10,000 times its own weight.

  11. The UK Closed 14,000 Pubs in 13 Years

    According to an analysis by data analyst Lauren Leek, the UK has closed more than 14,000 pubs since 2009, with registered pubs falling from 54,000 in 2009 to under 40,000 in 2022. The London area saw the smallest decline. Leek's analysis also found that pubs need to cluster together to avoid closure: a pub too far from its nearest neighbor is much more likely to fail. The median distance between a surviving pub and its nearest neighbor is about 280 meters, versus about 640 meters for closed pubs. A large share of UK pubs are controlled by private equity firms. The UK's largest pub company, Stonegate, controls one in eleven of the country's pubs and belongs to the private equity firm TDR Capital; it grew by acquiring other pubs with heavy borrowing, but being good at acquisitions does not mean being good at operations, and its leveraged buyouts have left it with more than $4 billion of debt. Between a quarter and a third of UK pubs are now controlled by private equity and overseas companies.