OrangeBot.AI Digest — 2026-02-24

54 headlines across 8 sources, aggregated for the day.

Hacker News (15)

  1. How we rebuilt Next.js with AI in one week (blog.cloudflare.com)
  2. Mac mini will be made at a new facility in Houston (www.apple.com)
  3. OpenAI, the US government and Persona built an identity surveillance machine (vmfunc.re)
  4. I'm helping my dog vibe code games (www.calebleak.com)
  5. Open Letter to Google on Mandatory Developer Registration for App Distribution (keepandroidopen.org)
  6. We installed a single turnstile to feel secure (idiallo.com)
  7. I pitched a roller coaster to Disneyland at age 10 in 1978 (wordglyph.xyz)
  8. Discord cuts ties with identity verification software, Persona (fortune.com)
  9. Goodbye InnerHTML, Hello SetHTML: Stronger XSS Protection in Firefox 148 (hacks.mozilla.org)
  10. IDF killed Gaza aid workers at point blank range in 2025 massacre: Report (www.dropsitenews.com)
  11. The Missing Semester of Your CS Education – Revised for 2026 (missing.csail.mit.edu)
  12. Diode – Build, program, and simulate hardware (www.withdiode.com)
  13. Unsung heroes: Flickr's URLs scheme (unsung.aresluna.org)
  14. Firefox 148 Launches with AI Kill Switch Feature and More Enhancements (serverhost.com)
  15. Show HN: X86CSS – An x86 CPU emulator written in CSS (lyra.horse)

GitHub Trending (12)

  1. huggingface / skills
  2. muratcankoylan / Agent-Skills-for-Context-Engineering
  3. OpenBB-finance / OpenBB
  4. LadybirdBrowser / ladybird
  5. x1xhlol / system-prompts-and-models-of-ai-tools
  6. obra / superpowers
  7. ruvnet / ruvector
  8. D4Vinci / Scrapling
  9. GVCLab / PersonaLive
  10. HunxByts / GhostTrack
  11. VectifyAI / PageIndex
  12. openemr / openemr

Hugging Face (15)

  1. A Very Big Video Reasoning Suite

    Rapid progress in video models has largely focused on visual quality, leaving their reasoning capabilities underexplored. Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture, enabling intuitive reasoning over spatiotemporal structure such as continuity, interaction, and causality. However, systematically studying video reasoning and its scaling behavior is hindered by the lack of large-scale training data. To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks following a principled taxonomy and over one million video clips, approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench, a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers, enabling reproducible and interpretable diagnosis of video reasoning capabilities. Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization to unseen reasoning tasks. Together, VBVR lays a foundation for the next stage of research in generalizable video reasoning. The data, benchmark toolkit, and models are publicly available at https://video-reason.com/ .

  2. VLANeXt: Recipes for Building Strong VLA Models

    Following the rise of large foundation models, Vision-Language-Action models (VLAs) emerged, leveraging strong visual and language understanding for general-purpose policy learning. Yet, the current VLA landscape remains fragmented and exploratory. Although many groups have proposed their own VLA models, inconsistencies in training protocols and evaluation settings make it difficult to identify which design choices truly matter. To bring structure to this evolving space, we reexamine the VLA design space under a unified framework and evaluation setup. Starting from a simple VLA baseline similar to RT-2 and OpenVLA, we systematically dissect design choices along three dimensions: foundational components, perception essentials, and action modelling perspectives. From this study, we distill 12 key findings that together form a practical recipe for building strong VLA models. The outcome of this exploration is a simple yet effective model, VLANeXt. VLANeXt outperforms prior state-of-the-art methods on the LIBERO and LIBERO-plus benchmarks and demonstrates strong generalization in real-world experiments. We will release a unified, easy-to-use codebase that serves as a common platform for the community to reproduce our findings, explore the design space, and build new VLA variants on top of a shared foundation.

  3. SkillOrchestra: Learning to Route Agents via Skill Transfer

    Compound AI systems promise capabilities beyond those of individual models, yet their success depends critically on effective orchestration. Existing routing approaches face two limitations: (1) input-level routers make coarse query-level decisions that ignore evolving task requirements; (2) RL-trained orchestrators are expensive to adapt and often suffer from routing collapse, repeatedly invoking one strong but costly option in multi-turn scenarios. We introduce SkillOrchestra, a framework for skill-aware orchestration. Instead of directly learning a routing policy end-to-end, SkillOrchestra learns fine-grained skills from execution experience and models agent-specific competence and cost under those skills. At deployment, the orchestrator infers the skill demands of the current interaction and selects agents that best satisfy them under an explicit performance-cost trade-off. Extensive experiments across ten benchmarks demonstrate that SkillOrchestra outperforms SoTA RL-based orchestrators by up to 22.5% with 700x and 300x learning cost reduction compared to Router-R1 and ToolOrchestra, respectively. These results show that explicit skill modeling enables scalable, interpretable, and sample-efficient orchestration, offering a principled alternative to data-intensive RL-based approaches. The code is available at: https://github.com/jiayuww/SkillOrchestra.
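
    The selection rule the abstract describes (infer skill demands, then pick the agent that best satisfies them under a performance-cost trade-off) can be sketched as follows. This is our illustration of the general idea, not the paper's algorithm; all agent names, competence numbers, and the cost weight are invented.

```python
# Hypothetical sketch of skill-aware routing: given estimated per-skill
# competence and a per-call cost for each agent, pick the agent that
# maximizes weighted competence on the inferred skill demands minus a
# cost penalty. All names and numbers are illustrative.

AGENTS = {
    # agent: (per-skill competence estimates, cost per call)
    "small-coder":    ({"code": 0.70, "math": 0.40, "search": 0.20}, 1.0),
    "big-generalist": ({"code": 0.85, "math": 0.80, "search": 0.75}, 10.0),
    "retriever":      ({"code": 0.10, "math": 0.05, "search": 0.90}, 0.5),
}

def route(skill_demands, agents=AGENTS, cost_weight=0.02):
    """skill_demands: {skill: weight} inferred for the current turn."""
    def score(entry):
        skills, cost = entry
        competence = sum(w * skills.get(s, 0.0)
                         for s, w in skill_demands.items())
        return competence - cost_weight * cost
    return max(agents, key=lambda name: score(agents[name]))

# A search-heavy turn routes to the cheap retriever rather than the
# strong-but-costly generalist, avoiding collapse onto one agent.
print(route({"search": 0.9, "code": 0.1}))   # retriever
print(route({"math": 0.7, "code": 0.3}))     # big-generalist
```

    The cost term is what prevents the routing collapse the abstract mentions: without it, `max` would always pick the strongest agent regardless of price.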

  4. TOPReward: Token Probabilities as Hidden Zero-Shot Rewards for Robotics

    While Vision-Language-Action (VLA) models have seen rapid progress in pretraining, their advancement in Reinforcement Learning (RL) remains hampered by low sample efficiency and sparse rewards in real-world settings. Developing generalizable process reward models is essential for providing the fine-grained feedback necessary to bridge this gap, yet existing temporal value functions often fail to generalize beyond their training domains. We introduce TOPReward, a novel, probabilistically grounded temporal value function that leverages the latent world knowledge of pretrained video Vision-Language Models (VLMs) to estimate robotic task progress. Unlike prior methods that prompt VLMs to directly output progress values, which are prone to numerical misrepresentation, TOPReward extracts task progress directly from the VLM's internal token logits. In zero-shot evaluations across 130+ distinct real-world tasks and multiple robot platforms (e.g., Franka, YAM, SO-100/101), TOPReward achieves 0.947 mean Value-Order Correlation (VOC) on Qwen3-VL, dramatically outperforming the state-of-the-art GVL baseline which achieves near-zero correlation on the same open-source model. We further demonstrate that TOPReward serves as a versatile tool for downstream applications, including success detection and reward-aligned behavior cloning.
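
    The core trick (reading a scalar off the model's token logits rather than its decoded text) can be illustrated with a minimal sketch. This is our reading of the abstract, not the paper's exact procedure: suppose the VLM is asked how far along the task is and we inspect the logits of a few candidate answer tokens such as "0", "25", "50", "75", "100"; a softmax over just those tokens yields a distribution whose expectation is a smooth progress estimate.

```python
import math

def progress_from_logits(candidate_logits):
    """Expected progress under a softmax restricted to candidate tokens.

    candidate_logits: {progress_value: raw logit} for each candidate
    answer token. Returns a value in [0, 1].
    """
    m = max(candidate_logits.values())          # for numerical stability
    exps = {v: math.exp(l - m) for v, l in candidate_logits.items()}
    z = sum(exps.values())
    return sum(v * e / z for v, e in exps.items())

# Illustrative logits with mass concentrated around 50-75% done.
logits = {0.0: -4.0, 0.25: -1.0, 0.5: 1.5, 0.75: 2.0, 1.0: -2.0}
print(round(progress_from_logits(logits), 3))
```

    Unlike parsing a decoded number, the expectation degrades gracefully when the model is uncertain, which is plausibly why logit-based extraction avoids the "numerical misrepresentation" the abstract attributes to direct prompting.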

  5. ManCAR: Manifold-Constrained Latent Reasoning with Adaptive Test-Time Computation for Sequential Recommendation

    Sequential recommendation increasingly employs latent multi-step reasoning to enhance test-time computation. Despite empirical gains, existing approaches largely drive intermediate reasoning states via target-dominant objectives without imposing explicit feasibility constraints. This results in latent drift, where reasoning trajectories deviate into implausible regions. We argue that effective recommendation reasoning should instead be viewed as navigation on a collaborative manifold rather than free-form latent refinement. To this end, we propose ManCAR (Manifold-Constrained Adaptive Reasoning), a principled framework that grounds reasoning within the topology of a global interaction graph. ManCAR constructs a local intent prior from the collaborative neighborhood of a user's recent actions, represented as a distribution over the item simplex. During training, the model progressively aligns its latent predictive distribution with this prior, forcing the reasoning trajectory to remain within the valid manifold. At test time, reasoning proceeds adaptively until the predictive distribution stabilizes, avoiding over-refinement. We provide a variational interpretation of ManCAR to theoretically validate its drift-prevention and adaptive test-time stopping mechanisms. Experiments on seven benchmarks demonstrate that ManCAR consistently outperforms state-of-the-art baselines, achieving up to a 46.88% relative improvement w.r.t. NDCG@10. Our code is available at https://github.com/FuCongResearchSquad/ManCAR.
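
    The adaptive test-time stopping rule ("reasoning proceeds until the predictive distribution stabilizes") can be sketched as an iterate-until-converged loop. The refinement step below, which simply pulls the prediction toward a neighborhood prior, is our stand-in for the model's actual latent update; the KL-based stopping test is the illustrated idea.

```python
import math

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

def refine(p, prior, alpha=0.5):
    """One reasoning step: mix the prediction with the intent prior."""
    mixed = [alpha * pi + (1 - alpha) * pr for pi, pr in zip(p, prior)]
    z = sum(mixed)
    return [m / z for m in mixed]

def adaptive_reason(p0, prior, tau=1e-6, max_steps=50):
    """Refine until successive distributions stabilize (KL < tau)."""
    p = p0
    for step in range(1, max_steps + 1):
        q = refine(p, prior)
        if kl(q, p) < tau:        # stabilized: stop, avoid over-refinement
            return q, step
        p = q
    return p, max_steps

uniform = [0.25] * 4
prior = [0.7, 0.1, 0.1, 0.1]      # neighborhood-derived intent prior
p, steps = adaptive_reason(uniform, prior)
print(steps, [round(x, 3) for x in p])
```

    Easy cases stabilize in few steps while harder ones use the full budget, which is the compute-adaptivity the abstract claims.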

  6. Mobile-O: Unified Multimodal Understanding and Generation on Mobile Device

    Unified multimodal models can both understand and generate visual content within a single architecture. Existing models, however, remain data-hungry and too heavy for deployment on edge devices. We present Mobile-O, a compact vision-language-diffusion model that brings unified multimodal intelligence to a mobile device. Its core module, the Mobile Conditioning Projector (MCP), fuses vision-language features with a diffusion generator using depthwise-separable convolutions and layerwise alignment. This design enables efficient cross-modal conditioning with minimal computational cost. Trained on only a few million samples and post-trained in a novel quadruplet format (generation prompt, image, question, answer), Mobile-O jointly enhances both visual understanding and generation capabilities. Despite its efficiency, Mobile-O attains competitive or superior performance compared to other unified models, achieving 74% on GenEval and outperforming Show-O and JanusFlow by 5% and 11%, while running 6x and 11x faster, respectively. For visual understanding, Mobile-O surpasses them by 15.3% and 5.1% averaged across seven benchmarks. Running in only ~3s per 512x512 image on an iPhone, Mobile-O establishes the first practical framework for real-time unified multimodal understanding and generation on edge devices. We hope Mobile-O will ease future research in real-time unified multimodal intelligence running entirely on-device with no cloud dependency. Our code, models, datasets, and mobile application are publicly available at https://amshaker.github.io/Mobile-O/

  7. Learning Cross-View Object Correspondence via Cycle-Consistent Mask Prediction

    We study the task of establishing object-level visual correspondence across different viewpoints in videos, focusing on the challenging egocentric-to-exocentric and exocentric-to-egocentric scenarios. We propose a simple yet effective framework based on conditional binary segmentation, where an object query mask is encoded into a latent representation to guide the localization of the corresponding object in a target video. To encourage robust, view-invariant representations, we introduce a cycle-consistency training objective: the predicted mask in the target view is projected back to the source view to reconstruct the original query mask. This bidirectional constraint provides a strong self-supervisory signal without requiring ground-truth annotations and enables test-time training (TTT) at inference. Experiments on the Ego-Exo4D and HANDAL-X benchmarks demonstrate the effectiveness of our optimization objective and TTT strategy, achieving state-of-the-art performance. The code is available at https://github.com/shannany0606/CCMP.

  8. Agents of Chaos

    We report an exploratory red-teaming study of autonomous language-model-powered agents deployed in a live laboratory environment with persistent memory, email accounts, Discord access, file systems, and shell execution. Over a two-week period, twenty AI researchers interacted with the agents under benign and adversarial conditions. Focusing on failures emerging from the integration of language models with autonomy, tool use, and multi-party communication, we document eleven representative case studies. Observed behaviors include unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover. In several cases, agents reported task completion while the underlying system state contradicted those reports. We also report on some of the failed attempts. Our findings establish the existence of security-, privacy-, and governance-relevant vulnerabilities in realistic deployment settings. These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms, and warrant urgent attention from legal scholars, policymakers, and researchers across disciplines. This report serves as an initial empirical contribution to that broader conversation.

  9. SimToolReal: An Object-Centric Policy for Zero-Shot Dexterous Tool Manipulation

    The ability to manipulate tools significantly expands the set of tasks a robot can perform. Yet, tool manipulation represents a challenging class of dexterity, requiring grasping thin objects, in-hand object rotations, and forceful interactions. Since collecting teleoperation data for these behaviors is challenging, sim-to-real reinforcement learning (RL) is a promising alternative. However, prior approaches typically require substantial engineering effort to model objects and tune reward functions for each task. In this work, we propose SimToolReal, taking a step towards generalizing sim-to-real RL policies for tool manipulation. Instead of focusing on a single object and task, we procedurally generate a large variety of tool-like object primitives in simulation and train a single RL policy with the universal goal of manipulating each object to random goal poses. This approach enables SimToolReal to perform general dexterous tool manipulation at test-time without any object or task-specific training. We demonstrate that SimToolReal outperforms prior retargeting and fixed-grasp methods by 37% while matching the performance of specialist RL policies trained on specific target objects and tasks. Finally, we show that SimToolReal generalizes across a diverse set of everyday tools, achieving strong zero-shot performance over 120 real-world rollouts spanning 24 tasks, 12 object instances, and 6 tool categories.

  10. DSDR: Dual-Scale Diversity Regularization for Exploration in LLM Reasoning

    Reinforcement learning with verifiers (RLVR) is a central paradigm for improving large language model (LLM) reasoning, yet existing methods often suffer from limited exploration. Policies tend to collapse onto a few reasoning patterns and prematurely stop deep exploration, while conventional entropy regularization introduces only local stochasticity and fails to induce meaningful path-level diversity, leading to weak and unstable learning signals in group-based policy optimization. We propose DSDR, a Dual-Scale Diversity Regularization reinforcement learning framework that decomposes diversity in LLM reasoning into global and coupling components. Globally, DSDR promotes diversity among correct reasoning trajectories to explore distinct solution modes. Locally, it applies a length-invariant, token-level entropy regularization restricted to correct trajectories, preventing entropy collapse within each mode while preserving correctness. The two scales are coupled through a global-to-local allocation mechanism that emphasizes local regularization for more distinctive correct trajectories. We provide theoretical support showing that DSDR preserves optimal correctness under bounded regularization, sustains informative learning signals in group-based optimization, and yields a principled global-to-local coupling rule. Experiments on multiple reasoning benchmarks demonstrate consistent improvements in accuracy and pass@k, highlighting the importance of dual-scale diversity for deep exploration in RLVR. Code is available at https://github.com/SUSTechBruce/DSDR.
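
    The "length-invariant, token-level entropy regularization" can be made concrete with a small sketch (our interpretation of the abstract: averaging per-token entropy over a trajectory rather than summing it, so the bonus does not grow with response length).

```python
import math

def token_entropy(probs):
    """Shannon entropy (nats) of one token's output distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_bonus(trajectory_token_dists):
    """Mean per-token entropy: invariant to trajectory length."""
    ents = [token_entropy(d) for d in trajectory_token_dists]
    return sum(ents) / len(ents)

# A long trajectory no longer collects a larger bonus than a short one
# just by emitting more tokens.
step = [0.5, 0.5]                  # maximally uncertain binary token
short = [step] * 4
long_traj = [step] * 400
print(round(entropy_bonus(short), 4), round(entropy_bonus(long_traj), 4))
# both print 0.6931 (ln 2)
```

    With a summed bonus, the policy could farm reward by padding responses; the mean removes that incentive while still penalizing entropy collapse within each mode.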

  11. RoboCurate: Harnessing Diversity with Action-Verified Neural Trajectory for Robot Learning

    Synthetic data generated by video generative models has shown promise for robot learning as a scalable pipeline, but it often suffers from inconsistent action quality due to imperfectly generated videos. Recently, vision-language models (VLMs) have been leveraged to validate video quality, but they have limitations in distinguishing physically accurate videos and, even then, cannot directly evaluate the generated actions themselves. To tackle this issue, we introduce RoboCurate, a novel synthetic robot data generation framework that evaluates and filters the quality of annotated actions by comparing them with simulation replay. Specifically, RoboCurate replays the predicted actions in a simulator and assesses action quality by measuring the consistency of motion between the simulator rollout and the generated video. In addition, we unlock observation diversity beyond the available dataset via image-to-image editing and apply action-preserving video-to-video transfer to further augment appearance. We observe RoboCurate's generated data yield substantial relative improvements in success rates compared to using real data only, achieving +70.1% on GR-1 Tabletop (300 demos), +16.1% on DexMimicGen in the pre-training setup, and +179.9% in the challenging real-world ALLEX humanoid dexterous manipulation setting.

  12. DODO: Discrete OCR Diffusion Models

    Optical Character Recognition (OCR) is a fundamental task for digitizing information, serving as a critical bridge between visual data and textual understanding. While modern Vision-Language Models (VLMs) have achieved high accuracy in this domain, they predominantly rely on autoregressive decoding, which becomes computationally expensive and slow for long documents as it requires a sequential forward pass for every generated token. We identify a key opportunity to overcome this bottleneck: unlike open-ended generation, OCR is a highly deterministic task where the visual input strictly dictates a unique output sequence, theoretically enabling efficient, parallel decoding via diffusion models. However, we show that existing masked diffusion models fail to harness this potential: they introduce structural instabilities that are benign in flexible tasks, like captioning, but catastrophic for the rigid, exact-match requirements of OCR. To bridge this gap, we introduce DODO, the first VLM to utilize block discrete diffusion and unlock its speedup potential for OCR. By decomposing generation into blocks, DODO mitigates the synchronization errors of global diffusion. Empirically, our method achieves near state-of-the-art accuracy while enabling up to 3x faster inference compared to autoregressive baselines.

  13. Anatomy of Agentic Memory: Taxonomy and Empirical Analysis of Evaluation and System Limitations

    Agentic memory systems enable large language model (LLM) agents to maintain state across long interactions, supporting long-horizon reasoning and personalization beyond fixed context windows. Despite rapid architectural development, the empirical foundations of these systems remain fragile: existing benchmarks are often underscaled, evaluation metrics are misaligned with semantic utility, performance varies significantly across backbone models, and system-level costs are frequently overlooked. This survey presents a structured analysis of agentic memory from both architectural and system perspectives. We first introduce a concise taxonomy of MAG systems based on four memory structures. Then, we analyze key pain points limiting current systems, including benchmark saturation effects, metric validity and judge sensitivity, backbone-dependent accuracy, and the latency and throughput overhead introduced by memory maintenance. By connecting the memory structure to empirical limitations, this survey clarifies why current agentic memory systems often underperform their theoretical promise and outlines directions for more reliable evaluation and scalable system design.

  14. K-Search: LLM Kernel Generation via Co-Evolving Intrinsic World Model

    Optimizing GPU kernels is critical for efficient modern machine learning systems yet remains challenging due to the complex interplay of design factors and rapid hardware evolution. Existing automated approaches typically treat Large Language Models (LLMs) merely as stochastic code generators within heuristic-guided evolutionary loops. These methods often struggle with complex kernels requiring coordinated, multi-step structural transformations, as they lack explicit planning capabilities and frequently discard promising strategies due to inefficient or incorrect intermediate implementations. To address this, we propose Search via Co-Evolving World Model and build K-Search based on this method. By replacing static search heuristics with a co-evolving world model, our framework leverages LLMs' prior domain knowledge to guide the search, actively exploring the optimization space. This approach explicitly decouples high-level algorithmic planning from low-level program instantiation, enabling the system to navigate non-monotonic optimization paths while remaining resilient to temporary implementation defects. We evaluate K-Search on diverse, complex kernels from FlashInfer, including GQA, MLA, and MoE kernels. Our results show that K-Search significantly outperforms state-of-the-art evolutionary search methods, achieving an average 2.10x improvement and up to a 14.3x gain on complex MoE kernels. On the GPUMode TriMul task, K-Search achieves state-of-the-art performance on H100, reaching 1030us and surpassing both prior evolution and human-designed solutions.

  15. Nacrith: Neural Lossless Compression via Ensemble Context Modeling and High-Precision CDF Coding

    We present Nacrith, a lossless compression system that combines a 135M-parameter transformer language model (SmolLM2-135M) with an ensemble of lightweight online predictors and a 32-bit arithmetic coder. Beyond the base LLM-plus-arithmetic-coding paradigm, Nacrith introduces several contributions: (1) a CDF precision upgrade from 2^16 to 2^24 that eliminates ~75% of quantization overhead caused by minimum-probability floors in large vocabularies; (2) a token-level N-gram model for fast local predictions; (3) an adaptive log-space bias head correcting per-document LLM errors via online gradient descent; (4) confidence-based LLM skip for accelerating highly predictable tokens; (5) a hybrid binary format (NC06) extending neural compression to arbitrary binary files--to our knowledge a first among LLM-based compressors; (6) a llama.cpp inference backend achieving ~7x faster single-token decode than PyTorch; (7) parallel multi-GPU compression across up to 8 workers; and (8) native KV cache sliding window reducing per-slide cost by ~37x. The system requires only ~500 MB of GGUF weights and ~1.2 GB VRAM per worker, running on consumer GPUs. On alice29.txt (Canterbury Corpus, 152 KB), Nacrith achieves 0.918 bits per byte (bpb)--outperforming gzip by 3.1x, bzip2 by 2.5x, CMIX v21 by 44%, and ts_zip by 20%, while compressing below the 0th-, 1st-, and 2nd-order byte-level Shannon entropy bounds. On enwik8 (100 MB), Nacrith achieves 0.9389 bpb (11.74%), surpassing ts_zip (~1.11 bpb) by 15% and FineZip (1.024 bpb) by 8% despite using a 60x smaller model with no fine-tuning. An out-of-distribution evaluation on a document published after the model's training cutoff confirms these gains are not memorization artifacts, achieving 0.723 bpb on unseen text.
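
    The claimed CDF precision upgrade can be checked with back-of-the-envelope arithmetic. This is our sketch, assuming a vocabulary of 49152 tokens for SmolLM2: a k-bit arithmetic coder must give every token a nonzero quantized probability of at least 2**-k, so the vocabulary reserves V * 2**-k of probability mass as a floor, and the model's true probabilities are scaled into what remains, inflating the code length of confident tokens.

```python
import math

V = 49152  # assumed SmolLM2 vocabulary size (illustrative)

def coded_bits(p, k, vocab=V):
    """Bits to code a token of true probability p with a k-bit CDF.

    Every token keeps a floor of 2**-k, so vocab * 2**-k of the mass
    is reserved and p is rescaled into the remainder.
    """
    floor_mass = vocab * 2.0 ** -k
    p_quantized = p * (1 - floor_mass) + 2.0 ** -k
    return -math.log2(p_quantized)

p = 0.99                           # a confidently predicted token
ideal = -math.log2(p)
for k in (16, 24):
    print(f"k={k}: {coded_bits(p, k):.3f} bits (ideal {ideal:.3f})")
```

    At 16-bit precision the floor reserves 49152/65536 = 75% of the mass, so even a 99%-probable token costs over 2 bits; at 24 bits the floor shrinks to ~0.3%, bringing the cost close to the ideal, consistent with the abstract's "~75% of quantization overhead" figure.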

Solidot (12)

  1. Pacific-to-Arctic heat transport has increased 1.5-fold over the past two decades

    According to a study published in JGR Oceans, heat transport by seawater flowing from the Pacific into the Arctic Ocean's Canada Basin has increased 1.5-fold over the past 20 years. The analysis attributes this not only to warmer inflowing water but also to shrinking Arctic sea ice, which further raises water temperatures. Under global warming, Arctic sea ice is declining, with the largest losses on the Pacific side. Since 2000 the research team has measured water temperature and current speed off Point Barrow, Alaska, where most of the water entering from the Pacific through the Bering Strait converges. The results show no trend in current speed but a long-term rise in water temperature. Ocean heat transport also trended upward, increasing 1.5-fold between 2000 and 2022. Using satellite sea-surface temperature and other data, the study further found that heat transport has risen sharply since the latter half of the 2010s. Heat transport is higher in low-ice years and lower in high-ice years: with less sea ice, the ocean absorbs more sunlight and warms, which in turn accelerates ice melt, forming a feedback loop.

  2. Panasonic's TV business to be taken over by Skyworth

    Following Sony, Panasonic, once famous for its plasma TVs, has announced that its TV business will be taken over by Skyworth. Starting in April, sales of Panasonic TVs in Europe and North America will be transferred to Skyworth, and the two companies will also cooperate on product development and manufacturing. Panasonic will focus on sales in Japan and production of high-end models while entrusting sales in other regions and production of lower-priced products to outside parties, which should help improve the profitability of its declining TV business. On the sales side, the Japanese market remains Panasonic's own, while Europe and North America go to Skyworth; for the remaining Asian markets, the best arrangement for each country and region, including cooperation with Skyworth, will be discussed going forward. In the plasma era Panasonic at one point held nearly half the market: in 2010 it controlled 40.7% of the plasma panel market, ahead of Samsung (33.7%) and LG (23.2%). But as consumers shifted toward LCD TVs, Panasonic ended plasma production in March 2014. Japanese companies such as Sharp, Toshiba, Hitachi, and Sony have all largely exited the TV market.

  3. Firefox 148 released with an AI kill switch

    Mozilla has released Firefox 148, introducing an AI kill switch that lets users disable all AI features; Mozilla promises future updates will not override the setting. The switch lives under Settings > AI Controls. Mozilla also lets users opt out of data collection to the greatest extent possible, via Settings > Privacy & Security > Firefox Data Collection. Other changes include: integration of the Trusted Types API and Sanitizer API to curb cross-site scripting (XSS); improved screen-reader support for mathematical formulas in PDFs; Firefox Backup on Windows 10; WebGPU support in Service Workers; and more.

  4. Ladybird browser project to adopt Rust with AI assistance

    The Ladybird browser project has announced it will adopt the Rust language with AI assistance. Ladybird is an open-source browser developed by the non-profit Ladybird Browser Initiative, with an alpha release planned for this year and a stable release in 2028. It was originally written in C++, and the developers say they have long been looking for a memory-safe language to replace it. They evaluated Rust in 2024 but passed on it because it handled C++-style object-oriented programming (OOP) poorly; a year later they have decided to adopt Rust after all, while Firefox and Chromium have both already begun introducing Rust into their codebases. Ladybird will first rewrite parts of its code in Rust, starting with the JavaScript engine LibJS; the developers completed 25,000 lines of code with the help of the AI coding tools Claude Code and Codex. Rust will mainly be used for subsystems, while the browser engine will continue to be developed in C++.

  5. ASML's improved EUV light source could boost chip output

    Researchers at the Dutch company ASML have improved the power of the light source used in extreme ultraviolet (EUV) lithography machines, which could raise chip output by 50% before the end of the decade. They have found a way to increase the EUV source's power from the current 600 W to 1,000 W. Higher power means more chips can be produced per hour, helping lower the cost of each chip. Chipmaking resembles photographic printing: EUV light is projected onto silicon wafers coated with photoresist, and a more powerful EUV source shortens the exposure time a fab needs. Teun van Gogh, ASML's executive vice president for EUV lithography systems, said that by 2030 an EUV machine should be able to process about 330 wafers per hour, up from 220 today.

  6. The F-35 can be jailbroken to run third-party software

    Dutch State Secretary for Defence Gijs Tuinman revealed that the F-35 can be jailbroken to install third-party software, much like early iPhones. He did not disclose many details about the jailbreak. The F-35 includes a cloud component, the ALIS/ODIN network, which besides handling software updates and logistics data is also used to upload highly sensitive mission data before a sortie and download intelligence and other data afterward. Among the US allies that purchased the F-35, only Israel is allowed to install software of its own development and to operate the aircraft outside the ALIS/ODIN network. Other countries' F-35s depend heavily on the US maintenance and logistics system, so jailbreaking could prompt the US to halt maintenance, ultimately leaving the aircraft unable to operate normally.

  7. Linux 7.0-rc1 released

    Linus Torvalds announced the release of Linux 7.0-rc1 on the kernel mailing list. Major changes include: support for Intel's upcoming Nova Lake and Diamond Rapids CPUs, AMD's Zen 6 CPUs and next-generation GPUs, and Qualcomm's Snapdragon X2; filesystem enhancements, including improved sequential-read performance for exFAT and concurrent direct-I/O write performance for EXT4; Rust language support is no longer experimental; and more.

  8. Newborn chicks also show the Bouba/Kiki effect

    Humans associate meaningless words with shapes, linking bouba with rounded shapes and kiki with spiky ones, a linguistic phenomenon known as the Bouba/Kiki effect. According to a study published in Science, tests on newly hatched chicks show that they exhibit it too: chicks one day and three days old spontaneously chose spiky shapes after hearing the sound kiki and rounded shapes after hearing bouba. The finding suggests there may be an innate mechanism for matching shapes to sounds that is shared across species, with origins possibly far older than previously thought.

  9. Is brain rot real?

    Does consuming too much dopamine-triggering social media content cause brain rot? According to multiple studies, it may. Research shows that scrolling through short videos on platforms such as TikTok, Instagram, or YouTube Shorts affects attention, memory, and mental health. Some studies have found that increased short-video use correlates with cognitive decline and heightened anxiety. According to a study published in Translational Psychiatry, an analysis of more than 7,000 children found that more screen time was associated with a thinner cortex in parts of the brain. The cortex is the brain region responsible for higher-order thinking, memory, and decision-making, and it is also crucial for controlling addictive behavior. Another study found that removing social media apps from children's phones, without restricting phone use itself, significantly reduced the negative effects.

  10. The I2P anonymity network hit by a Sybil attack from the Kimwolf botnet

    On February 3, the I2P anonymity network suffered a Sybil attack from the Kimwolf IoT botnet. In a Sybil attack, an attacker creates Sybil nodes to manipulate the whole network system and disrupt its normal operation. The decentralized I2P anonymity network typically has only 15,000-20,000 active devices, but that day as many as 700,000 malicious nodes flooded in, 39 times the number of legitimate nodes. Kimwolf's main C&C (command and control) servers had previously been disrupted by Google and other companies; the botnet's operator said on Discord that it had tried to use the I2P network as backup C&C infrastructure and accidentally broke it. Six days later the I2P team released v2.11.0, adding mitigations against Sybil attacks and enabling the post-quantum encryption algorithm ML-KEM alongside X25519 by default.

  11. When AI becomes a means of production: on technical positioning

    Nala Ginrut writes: When AI becomes productivity infrastructure, do we still retain the ability to migrate and the freedom to choose? If today is the window period, what preparations made during that window would keep us from responding passively during the lock-in and contraction periods? This involves a concept I call "technical positioning" (技术格局). It does not mean confronting or rejecting platforms, nor does it stress self-sufficiency; rather, it means that for key production tools, individuals retain basic migration capability and room to choose.

  12. DNA technology and genealogy databases crack a 1982 murder case

    DNA technology and genetic genealogy databases have once again helped police solve a cold case. Sarah Geer, a 13-year-old girl from Cloverdale, California, went missing on the night of May 23, 1982 after leaving a friend's house; a firefighter found her body the next morning. Her death was ruled a homicide, but technical limitations prevented investigators from identifying a suspect, and the case sat unsolved for more than 40 years. Using DNA collected from Sarah's body and genetic genealogy databases, the FBI determined the killer was one of four brothers. Investigators put them under surveillance, collected discarded cigarettes, and identified James Unick, now 64, as the killer. Nearly 44 years after Sarah's murder, a jury convicted him on February 13. The local prosecutor's office said in a statement that while 44 years was far too long to wait, justice had finally been served.