OrangeBot.AI Digest — 2026-02-22

49 headlines across 8 sources, aggregated for this day.

Hacker News (15)

  1. AWS won't discuss my bill, suspended my account, took $1,600, still no human
  2. Global Intelligence Crisis (www.citriniresearch.com)
  3. Loops is a federated, open-source TikTok (joinloops.org)
  4. I built Timeframe, our family e-paper dashboard (hawksley.org)
  5. Show HN: Local-First Linux MicroVMs for macOS (shuru.run)
  6. Git's Magic Files (nesbitt.io)
  7. Fix your tools (ochagavia.nl)
  8. Xweather Live – Interactive global vector weather map (live.xweather.com)
  9. We hid backdoors in ~40MB binaries and asked AI + Ghidra to find them (quesma.com)
  10. Iran students stage first large anti-government protests since deadly crackdown (www.bbc.com)
  11. Man accidentally gains control of 7k robot vacuums (www.popsci.com)
  12. Attention Media ≠ Social Networks (susam.net)
  13. What is a database transaction? (planetscale.com)
  14. Back to FreeBSD: Part 1 (hypha.pub)
  15. How Taalas “prints” LLM onto a chip? (www.anuragk.com)

GitHub Trending (9)

  1. huggingface / skills
  2. vxcontrol / pentagi

    ✨ Fully autonomous AI Agents system capable of performing complex penetration testing tasks

  3. anthropics / claude-code

    Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows - all through natural language commands.

  4. x1xhlol / system-prompts-and-models-of-ai-tools

    FULL Augment Code, Claude Code, Cluely, CodeBuddy, Comet, Cursor, Devin AI, Junie, Kiro, Leap.new, Lovable, Manus, NotionAI, Orchids.app, Perplexity, Poke, Qoder, Replit, Same.dev, Trae, Traycer AI, VSCode Agent, Warp.dev, Windsurf, Xcode, Z.ai Code, Dia & v0. (And other Open Sourced) System Prompts, Internal Tools & AI Models

  5. Stremio / stremio-web

    Stremio - Freedom to Stream

  6. OpenBB-finance / OpenBB

    Financial data platform for analysts, quants and AI agents.

  7. cloudflare / agents

    Build and deploy AI Agents on Cloudflare

  8. abhigyanpatwari / GitNexus

    GitNexus: The Zero-Server Code Intelligence Engine - GitNexus is a client-side knowledge graph creator that runs entirely in your browser. Drop in a GitHub repo or ZIP file, and get an interactive knowledge graph with a built-in Graph RAG Agent. Perfect for code exploration

  9. stan-smith / FossFLOW

    Make beautiful isometric infrastructure diagrams

Hugging Face (15)

  1. SpargeAttention2: Trainable Sparse Attention via Hybrid Top-k+Top-p Masking and Distillation Fine-Tuning

    Many training-free sparse attention methods are effective for accelerating diffusion models. Recently, several works suggest that making sparse attention trainable can further increase sparsity while preserving generation quality. We study three key questions: (1) when do the two common masking rules, i.e., Top-k and Top-p, fail, and how can we avoid these failures? (2) why can trainable sparse attention reach higher sparsity than training-free methods? (3) what are the limitations of fine-tuning sparse attention using the diffusion loss, and how can we address them? Based on this analysis, we propose SpargeAttention2, a trainable sparse attention method that achieves high sparsity without degrading generation quality. SpargeAttention2 includes (i) a hybrid masking rule that combines Top-k and Top-p for more robust masking at high sparsity, (ii) an efficient trainable sparse attention implementation, and (iii) a distillation-inspired fine-tuning objective to better preserve generation quality during fine-tuning using sparse attention. Experiments on video diffusion models show that SpargeAttention2 reaches 95% attention sparsity and a 16.2x attention speedup while maintaining generation quality, consistently outperforming prior sparse attention methods.
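The hybrid masking rule can be sketched in a few lines, assuming the two rules are combined as a union (the function name and the exact combination logic are my own reading; the actual method operates on GPU attention blocks, not Python lists):

```python
import math

def hybrid_mask(scores, k, p):
    """Hypothetical union of the Top-k and Top-p masking rules:
    keep an entry if it is among the k highest scores OR inside the
    smallest set covering cumulative probability mass p."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]     # numerically stable softmax
    z = sum(exps)
    probs = [e / z for e in exps]

    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    keep = set(order[:k])                        # Top-k rule
    cum = 0.0
    for i in order:                              # Top-p rule
        keep.add(i)
        cum += probs[i]
        if cum >= p:
            break
    return [i in keep for i in range(len(scores))]
```

At high sparsity, Top-k alone can drop high-mass entries when the score distribution is flat, while Top-p alone can keep very few when it is peaked; a union hedges against both failure modes, which is one reading of the robustness claim in the abstract.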

  2. Mobile-Agent-v3.5: Multi-platform Fundamental GUI Agents

    The paper introduces GUI-Owl-1.5, the latest native GUI agent model, which features instruct/thinking variants in multiple sizes (2B/4B/8B/32B/235B) and supports a range of platforms (desktop, mobile, browser, and more) to enable cloud-edge collaboration and real-time interaction. GUI-Owl-1.5 achieves state-of-the-art results among open-source models on more than 20 GUI benchmarks: (1) on GUI automation tasks, it obtains 56.5 on OSWorld, 71.6 on AndroidWorld, and 48.4 on WebArena; (2) on grounding tasks, it obtains 80.3 on ScreenSpotPro; (3) on tool-calling tasks, it obtains 47.6 on OSWorld-MCP and 46.8 on MobileWorld; (4) on memory and knowledge tasks, it obtains 75.5 on GUI-Knowledge Bench. GUI-Owl-1.5 incorporates several key innovations: (1) Hybrid Data Flywheel: we construct the data pipeline for UI understanding and trajectory generation based on a combination of simulated environments and cloud-based sandbox environments, in order to improve the efficiency and quality of data collection. (2) Unified Enhancement of Agent Capabilities: we use a unified thought-synthesis pipeline to enhance the model's reasoning capabilities, while placing particular emphasis on improving key agent abilities, including Tool/MCP use, memory, and multi-agent adaptation. (3) Multi-platform Environment RL Scaling: we propose a new environment RL algorithm, MRPO, to address the challenges of multi-platform conflicts and the low training efficiency of long-horizon tasks. The GUI-Owl-1.5 models are open-sourced, and an online cloud-sandbox demo is available at https://github.com/X-PLUG/MobileAgent.

  3. Unified Latents (UL): How to train your latents

    We present Unified Latents (UL), a framework for learning latent representations that are jointly regularized by a diffusion prior and decoded by a diffusion model. By linking the encoder's output noise to the prior's minimum noise level, we obtain a simple training objective that provides a tight upper bound on the latent bitrate. On ImageNet-512, our approach achieves competitive FID of 1.4, with high reconstruction quality (PSNR) while requiring fewer training FLOPs than models trained on Stable Diffusion latents. On Kinetics-600, we set a new state-of-the-art FVD of 1.3.

  4. Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report v1.5

    To understand and identify the unprecedented risks posed by rapidly advancing artificial intelligence (AI) models, Frontier AI Risk Management Framework in Practice presents a comprehensive assessment of their frontier risks. As the general capabilities of Large Language Models (LLMs) rapidly evolve and agentic AI proliferates, this version of the risk analysis technical report presents an updated and granular assessment of five critical dimensions: cyber offense, persuasion and manipulation, strategic deception, uncontrolled AI R&D, and self-replication. Specifically, we introduce more complex scenarios for cyber offense. For persuasion and manipulation, we evaluate the risk of LLM-to-LLM persuasion on newly released LLMs. For strategic deception and scheming, we add a new experiment on emergent misalignment. For uncontrolled AI R&D, we focus on the "mis-evolution" of agents as they autonomously expand their memory substrates and toolsets. We also monitor and evaluate the safety performance of OpenClaw during interaction on the Moltbook. For self-replication, we introduce a new resource-constrained scenario. More importantly, we propose and validate a series of robust mitigation strategies to address these emerging threats, providing a preliminary, technical, and actionable pathway for the secure deployment of frontier AI. This work reflects our current understanding of AI frontier risks and urges collective action to mitigate these challenges.

  5. Arcee Trinity Large Technical Report

    We present the technical report for Arcee Trinity Large, a sparse Mixture-of-Experts model with 400B total parameters and 13B activated per token. Additionally, we report on Trinity Nano and Trinity Mini, with Trinity Nano having 6B total parameters with 1B activated per token, Trinity Mini having 26B total parameters with 3B activated per token. The models' modern architecture includes interleaved local and global attention, gated attention, depth-scaled sandwich norm, and sigmoid routing for Mixture-of-Experts. For Trinity Large, we also introduce a new MoE load balancing strategy titled Soft-clamped Momentum Expert Bias Updates (SMEBU). We train the models using the Muon optimizer. All three models completed training with zero loss spikes. Trinity Nano and Trinity Mini were pre-trained on 10 trillion tokens, and Trinity Large was pre-trained on 17 trillion tokens. The model checkpoints are available at https://huggingface.co/arcee-ai.

  6. Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents

    LLMs are increasingly being used for complex problems which are not necessarily resolved in a single response, but require interacting with an environment to acquire information. In these scenarios, LLMs must reason about inherent cost-uncertainty tradeoffs in when to stop exploring and commit to an answer. For instance, on a programming task, an LLM should test a generated code snippet if it is uncertain about the correctness of that code; the cost of writing a test is nonzero, but typically lower than the cost of making a mistake. In this work, we show that we can induce LLMs to explicitly reason about balancing these cost-uncertainty tradeoffs, then perform more optimal environment exploration. We formalize multiple tasks, including information retrieval and coding, as sequential decision-making problems under uncertainty. Each problem has latent environment state that can be reasoned about via a prior which is passed to the LLM agent. We introduce a framework called Calibrate-Then-Act (CTA), where we feed the LLM this additional context to enable it to act more optimally. This improvement is preserved even under RL training of both the baseline and CTA. Our results on information-seeking QA and on a simplified coding task show that making cost-benefit tradeoffs explicit with CTA can help agents discover more optimal decision-making strategies.
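The programming example in the abstract reduces to a one-line expected-value rule. This is a toy illustration under my own naming, not the paper's CTA framework:

```python
def should_test(p_correct, test_cost, mistake_cost):
    """Toy cost-aware rule (names are illustrative, not the paper's API):
    run a verification step only when the expected loss of committing now,
    (1 - p_correct) * mistake_cost, exceeds the cost of the test."""
    return test_cost < (1.0 - p_correct) * mistake_cost
```

A well-calibrated agent with high confidence skips the cheap test; an uncertain one pays for it, which is the tradeoff CTA asks the LLM to reason about explicitly.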

  7. "What Are You Doing?": Effects of Intermediate Feedback from Agentic LLM In-Car Assistants During Multi-Step Processing

    Agentic AI assistants that autonomously perform multi-step tasks raise open questions for user experience: how should such systems communicate progress and reasoning during extended operations, especially in attention-critical contexts such as driving? We investigate feedback timing and verbosity from agentic LLM-based in-car assistants through a controlled, mixed-methods study (N=45) comparing planned steps and intermediate results feedback against silent operation with final-only response. Using a dual-task paradigm with an in-car voice assistant, we found that intermediate feedback significantly improved perceived speed, trust, and user experience while reducing task load - effects that held across varying task complexities and interaction contexts. Interviews further revealed user preferences for an adaptive approach: high initial transparency to establish trust, followed by progressively reducing verbosity as systems prove reliable, with adjustments based on task stakes and situational context. We translate our empirical findings into design implications for feedback timing and verbosity in agentic assistants, balancing transparency and efficiency.

  8. Computer-Using World Model

    Agents operating in complex software environments benefit from reasoning about the consequences of their actions, as even a single incorrect user interface (UI) operation can derail long, artifact-preserving workflows. This challenge is particularly acute for computer-using scenarios, where real execution does not support counterfactual exploration, making large-scale trial-and-error learning and planning impractical despite the environment being fully digital and deterministic. We introduce the Computer-Using World Model (CUWM), a world model for desktop software that predicts the next UI state given the current state and a candidate action. CUWM adopts a two-stage factorization of UI dynamics: it first predicts a textual description of agent-relevant state changes, and then realizes these changes visually to synthesize the next screenshot. CUWM is trained on offline UI transitions collected from agents interacting with real Microsoft Office applications, and further refined with a lightweight reinforcement learning stage that aligns textual transition predictions with the structural requirements of computer-using environments. We evaluate CUWM via test-time action search, where a frozen agent uses the world model to simulate and compare candidate actions before execution. Across a range of Office tasks, world-model-guided test-time scaling improves decision quality and execution robustness.

  9. DDiT: Dynamic Patch Scheduling for Efficient Diffusion Transformers

    Diffusion Transformers (DiTs) have achieved state-of-the-art performance in image and video generation, but their success comes at the cost of heavy computation. This inefficiency is largely due to the fixed tokenization process, which uses constant-sized patches throughout the entire denoising phase, regardless of the content's complexity. We propose dynamic tokenization, an efficient test-time strategy that varies patch sizes based on content complexity and the denoising timestep. Our key insight is that early timesteps only require coarser patches to model global structure, while later iterations demand finer (smaller-sized) patches to refine local details. During inference, our method dynamically reallocates patch sizes across denoising steps for image and video generation and substantially reduces cost while preserving perceptual generation quality. Extensive experiments demonstrate the effectiveness of our approach: it achieves up to 3.52× and 3.2× speedup on FLUX-1.Dev and Wan 2.1, respectively, without compromising the generation quality and prompt adherence.
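The coarse-to-fine intuition can be sketched as a simple timestep-indexed schedule (the breakpoints and patch sizes below are illustrative, not the paper's actual values):

```python
def patch_size_for_step(step, num_steps, sizes=(4, 2, 1)):
    """Illustrative coarse-to-fine schedule: early denoising steps get
    large patches (global structure), late steps get small ones (local
    detail). Breakpoints and sizes are my own, not the paper's."""
    frac = step / max(num_steps - 1, 1)          # progress in [0, 1]
    idx = min(int(frac * len(sizes)), len(sizes) - 1)
    return sizes[idx]
```

Larger patches mean fewer tokens and quadratically cheaper attention, which is where the reported speedups would come from.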

  10. TactAlign: Human-to-Robot Policy Transfer via Tactile Alignment

    Human demonstrations collected by wearable devices (e.g., tactile gloves) provide fast and dexterous supervision for policy learning, and are guided by rich, natural tactile feedback. However, a key challenge is how to transfer human-collected tactile signals to robots despite the differences in sensing modalities and embodiment. Existing human-to-robot (H2R) approaches that incorporate touch often assume identical tactile sensors, require paired data, and involve little to no embodiment gap between the human demonstrator and the robots, limiting scalability and generality. We propose TactAlign, a cross-embodiment tactile alignment method that transfers human-collected tactile signals to a robot with a different embodiment. TactAlign transforms human and robot tactile observations into a shared latent representation using a rectified flow, without paired datasets, manual labels, or privileged information. Our method enables low-cost latent transport guided by pseudo-pairs derived from hand-object interactions. We demonstrate that TactAlign improves H2R policy transfer across multiple contact-rich tasks (pivoting, insertion, lid closing), generalizes to unseen objects and tasks with less than 5 minutes of human data, and enables zero-shot H2R transfer on a highly dexterous task (light bulb screwing).

  11. On the Mechanism and Dynamics of Modular Addition: Fourier Features, Lottery Ticket, and Grokking

    We present a comprehensive analysis of how two-layer neural networks learn features to solve the modular addition task. Our work provides a full mechanistic interpretation of the learned model and a theoretical explanation of its training dynamics. While prior work has identified that individual neurons learn single-frequency Fourier features and phase alignment, it does not fully explain how these features combine into a global solution. We bridge this gap by formalizing a diversification condition that emerges during training when overparametrized, consisting of two parts: phase symmetry and frequency diversification. We prove that these properties allow the network to collectively approximate a flawed indicator function on the correct logic for the modular addition task. While individual neurons produce noisy signals, the phase symmetry enables a majority-voting scheme that cancels out noise, allowing the network to robustly identify the correct sum. Furthermore, we explain the emergence of these features under random initialization via a lottery ticket mechanism. Our gradient flow analysis proves that frequencies compete within each neuron, with the "winner" determined by its initial spectral magnitude and phase alignment. From a technical standpoint, we provide a rigorous characterization of the layer-wise phase coupling dynamics and formalize the competitive landscape using the ODE comparison lemma. Finally, we use these insights to demystify grokking, characterizing it as a three-stage process involving memorization followed by two generalization phases, driven by the competition between loss minimization and weight decay.
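The Fourier-feature mechanism described above has a clean closed form that can be checked directly: summing cosines over frequencies produces a sharp peak exactly at the correct sum, the collective "majority vote" the analysis describes (this is the idealized mechanism, not the trained two-layer network):

```python
import math

def mod_add_logits(a, b, p):
    """Idealized Fourier-feature circuit for (a + b) mod p: for each
    candidate answer c, sum cos(2*pi*f*(a + b - c)/p) over frequencies.
    All cosines align (value 1) only when c == (a + b) mod p."""
    half = (p - 1) // 2
    return [sum(math.cos(2 * math.pi * f * (a + b - c) / p)
                for f in range(1, half + 1))
            for c in range(p)]

def predict(a, b, p):
    """Answer is the candidate with the largest logit."""
    logits = mod_add_logits(a, b, p)
    return max(range(p), key=lambda c: logits[c])
```

For odd prime p the correct answer scores (p - 1)/2 while every wrong answer sums to -1/2, so even a strict subset of frequencies (the "lottery ticket" winners) still outvotes the noise, consistent with the majority-voting picture in the abstract.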

  12. ArXiv-to-Model: A Practical Study of Scientific LM Training

    While frontier large language models demonstrate strong reasoning and mathematical capabilities, the practical process of training domain-specialized scientific language models from raw sources remains under-documented. In this work, we present a detailed case study of training a 1.36B-parameter scientific language model directly from raw arXiv LaTeX sources spanning mathematics, computer science, and theoretical physics. We describe an end-to-end pipeline covering metadata filtering, archive validation, LaTeX extraction, text normalization, domain-aware tokenization, and dense transformer training under constrained compute (2xA100 GPUs). Through 24 experimental runs, we analyze training stability, scaling behavior, data yield losses, and infrastructure bottlenecks. Our findings highlight how preprocessing decisions significantly affect usable token volume, how tokenization impacts symbolic stability, and how storage and I/O constraints can rival compute as limiting factors. We further analyze convergence dynamics and show stable training behavior in a data-rich regime (52B pretraining tokens). Rather than proposing a novel architecture, this work provides an engineering-grounded, transparent account of training a small scientific language model from scratch. We hope these insights support researchers operating under moderate compute budgets who seek to build domain-specialized models.

  13. 2Mamba2Furious: Linear in Complexity, Competitive in Accuracy

    Linear attention transformers have become a strong alternative to softmax attention due to their efficiency. However, linear attention tends to be less expressive and results in reduced accuracy compared to softmax attention. To bridge the accuracy gap between softmax attention and linear attention, we manipulate Mamba-2, a very strong linear attention variant. We first simplify Mamba-2 down to its most fundamental and important components, evaluating which specific choices make it most accurate. From this simplified Mamba variant (Mamba-2S), we improve the A-mask and increase the order of the hidden state, resulting in a method, which we call 2Mamba, that is nearly as accurate as softmax attention, yet much more memory efficient for long context lengths. We also investigate additions to Mamba-2 that help surpass softmax attention accuracy. Code is provided for all our experiments.
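For context, the linear-complexity recurrence this family of models builds on can be sketched as causal linear attention with an outer-product running state (a toy version with plain feature maps and simple sum normalization, not Mamba-2's actual parameterization, which adds a decay A-mask and gating):

```python
def linear_attention(qs, ks, vs):
    """Toy causal linear attention: maintain running state S += k v^T and
    normalizer z += k, so each step costs O(d * d_v) regardless of
    sequence length; no decay mask, no gating."""
    d, dv = len(qs[0]), len(vs[0])
    S = [[0.0] * dv for _ in range(d)]   # d x d_v running state
    z = [0.0] * d                        # running key sum for normalization
    out = []
    for q, k, v in zip(qs, ks, vs):
        for i in range(d):
            z[i] += k[i]
            for j in range(dv):
                S[i][j] += k[i] * v[j]
        denom = sum(q[i] * z[i] for i in range(d)) or 1.0
        out.append([sum(q[i] * S[i][j] for i in range(d)) / denom
                    for j in range(dv)])
    return out
```

Because the state has fixed size, memory does not grow with context length, which is the efficiency advantage the abstract trades against accuracy.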

  14. Discovering Multiagent Learning Algorithms with Large Language Models

    Much of the advancement of Multi-Agent Reinforcement Learning (MARL) in imperfect-information games has historically depended on manual iterative refinement of baselines. While foundational families like Counterfactual Regret Minimization (CFR) and Policy Space Response Oracles (PSRO) rest on solid theoretical ground, the design of their most effective variants often relies on human intuition to navigate a vast algorithmic design space. In this work, we propose the use of AlphaEvolve, an evolutionary coding agent powered by large language models, to automatically discover new multiagent learning algorithms. We demonstrate the generality of this framework by evolving novel variants for two distinct paradigms of game-theoretic learning. First, in the domain of iterative regret minimization, we evolve the logic governing regret accumulation and policy derivation, discovering a new algorithm, Volatility-Adaptive Discounted (VAD-)CFR. VAD-CFR employs novel, non-intuitive mechanisms, including volatility-sensitive discounting, consistency-enforced optimism, and a hard warm-start policy accumulation schedule, to outperform state-of-the-art baselines like Discounted Predictive CFR+. Second, in the regime of population-based training algorithms, we evolve training-time and evaluation-time meta-strategy solvers for PSRO, discovering a new variant, Smoothed Hybrid Optimistic Regret (SHOR-)PSRO. SHOR-PSRO introduces a hybrid meta-solver that linearly blends Optimistic Regret Matching with a smoothed, temperature-controlled distribution over best pure strategies. By dynamically annealing this blending factor and diversity bonuses during training, the algorithm automates the transition from population diversity to rigorous equilibrium finding, yielding superior empirical convergence compared to standard static meta-solvers.
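Both evolved variants build on the same core primitive, regret matching, which derives a policy from cumulative regrets. A baseline sketch of that standard step, simplified to a single decision point:

```python
def regret_matching(regrets):
    """Regret matching, the policy-derivation step shared by the CFR
    family: play each action with probability proportional to its
    positive cumulative regret; fall back to uniform when no action
    has positive regret."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total <= 0.0:
        return [1.0 / len(regrets)] * len(regrets)
    return [r / total for r in positives]
```

Per the abstract, the evolved variants modify how the regrets are accumulated and discounted and how policies are derived from them; the version shown is the unmodified textbook baseline.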

  15. World Models for Policy Refinement in StarCraft II

    Large Language Models (LLMs) have recently shown strong reasoning and generalization capabilities, motivating their use as decision-making policies in complex environments. StarCraft II (SC2), with its massive state-action space and partial observability, is a challenging testbed. However, existing LLM-based SC2 agents primarily focus on improving the policy itself and overlook integrating a learnable, action-conditioned transition model into the decision loop. To bridge this gap, we propose StarWM, the first world model for SC2 that predicts future observations under partial observability. To facilitate learning SC2's hybrid dynamics, we introduce a structured textual representation that factorizes observations into five semantic modules, and construct SC2-Dynamics-50k, the first instruction-tuning dataset for SC2 dynamics prediction. We further develop a multi-dimensional offline evaluation framework for predicted structured observations. Offline results show StarWM's substantial gains over zero-shot baselines, including nearly 60% improvements in resource prediction accuracy and self-side macro-situation consistency. Finally, we propose StarWM-Agent, a world-model-augmented decision system that integrates StarWM into a Generate--Simulate--Refine decision loop for foresight-driven policy refinement. Online evaluation against SC2's built-in AI demonstrates consistent improvements, yielding win-rate gains of 30%, 15%, and 30% against Hard (LV5), Harder (LV6), and VeryHard (LV7), respectively, alongside improved macro-management stability and tactical risk assessment.

Solidot (10)

  1. When AI Becomes a Means of Production: On "Technical Posture"

    Nala Ginrut writes: When AI becomes production infrastructure, do we still retain the ability to migrate and the freedom to choose? If today is the window period, what preparations should we make during it to avoid being forced into a purely reactive position during the lock-in and contraction phases that follow? This involves a concept I call "technical posture" (技术格局). It does not mean opposing or rejecting platforms, nor does it emphasize self-sufficiency; rather, it means that for critical production tools, individuals retain a basic ability to migrate and some room to choose.

  2. DNA Technology and a Genealogy Database Crack a 1982 Murder Case

    DNA technology and genealogical genetic databases have once again helped police solve a decades-old cold case. Sarah Geer, a 13-year-old girl from Cloverdale, California, disappeared after leaving a friend's house on the evening of May 23, 1982; a firefighter found her body the next morning. Her death was ruled a homicide, but the technical limitations of the time prevented investigators from identifying a suspect, and the case sat unsolved for more than 40 years. Using DNA collected from Sarah's body and a genealogical genetic database, the FBI narrowed the killer down to one of four brothers. Investigators put them under surveillance, collected discarded cigarettes, and identified 64-year-old James Unick as the killer. Nearly 44 years after Sarah's death, a jury convicted him of murder on February 13. The local prosecutor's office said in a statement that while 44 years was far too long to wait, justice had finally been served.

  3. Microsoft Gaming Executives Depart; Successor Comes from the AI Division

    Phil Spencer, CEO of Microsoft's Xbox and gaming business, is leaving after 38 years at the company, and Xbox president Sarah Bond, widely seen as his successor, has also resigned. The gaming business's new CEO will be Asha Sharma, who leads CoreAI products. Spencer was appointed head of Xbox in March 2014. During his tenure he launched the Xbox Game Pass subscription service, and he is best known for the $69 billion acquisition of Activision Blizzard. He also acquired a string of game studios, including the 2020 purchase of Bethesda parent ZeniMax for $7.5 billion, which gave Microsoft full control of famous game IP such as Fallout and The Elder Scrolls (Bethesda) and Doom and Quake (id Software).

  4. NASA Plans Crewed Artemis II Lunar Flyby for March 6

    NASA plans to launch the crewed Artemis II mission around the Moon on March 6. The Space Launch System (SLS) rocket for the mission is already standing on the launch pad at Kennedy Space Center in Florida. NASA officials will conduct a multi-day flight readiness review next week to make sure every aspect of the rocket is ready. Earlier this month the SLS suffered a liquid hydrogen leak during its first fueling test; officials say the problem appears to have been resolved after some seals were replaced.

  5. Wikipedia Blocks Archive.today

    After the archiving site Archive.today embedded a script in its CAPTCHA verification page that launched a DDoS attack against Gyrovague, the personal blog of blogger Jani Patokallio, Wikipedia formally decided to blacklist Archive.today along with its related domains archive.is, archive.ph, archive.fo, archive.li, archive.md, and archive.vn, and has begun removing Archive.today links from Wikipedia articles. Archive.today is widely used to bypass paywalls; some 400,000 Wikipedia articles contain more than 695,000 Archive.today links, which will be replaced with archives from the Internet Archive, Ghostarchive, or Megalodon, or with links to the original articles. Patokallio is a Finn working in Australia; Archive.today's operator embedded the attack script in the CAPTCHA page served to Finnish IPs. The script is still present, and the DDoS attack on Patokallio's blog is still ongoing.

  6. Google Pays Lip Service to Keeping Android Open

    Last August, Google announced that starting in September 2026 it would enforce a developer identity verification policy, barring the installation of apps from unverified developers on Android devices. The move triggered a strong community backlash, and Google subsequently softened its stance, announcing that it would continue to allow installation of apps from unverified developers. It said it was building an advanced flow that would let experienced users accept, at their own risk, the installation of software from developers whose identity had not been verified; the flow would include clear warnings to ensure users fully understand the risks, with the final choice remaining in users' hands. Over the past few months, however, there has been no sign of Google building that flow, while the developer verification policy continues to advance. The Android FOSS app store F-Droid has issued a warning, arguing that Google is paying lip service and that the earlier softening was merely a PR tactic.

  7. Trump to Order Release of Files on Aliens and UFOs

    US President Donald Trump said he will order agencies, including the Department of Defense, to release files related to aliens and UFOs. The move is widely seen as an attempt to divert public attention from the Epstein files. Trump said he does not know whether aliens really exist. Last week, former US President Barack Obama said in an interview with Brian Tyler Cohen that he believes aliens are real, but that he has never seen one, that no aliens are being held at Area 51, and that there is no secret underground facility there, unless there really is some vast conspiracy whose participants have hidden the existence of aliens even from the President of the United States.

  8. FBI Informant Helped Run the Darknet Drug Market Incognito

    At the sentencing hearing of Lin Rui-Siang, an administrator of the darknet drug market Incognito, his defense attorney revealed for the first time that another Incognito administrator was an FBI informant, and that this administrator handled most of Incognito's drug trade; Lin insists he was mainly responsible for the code and the site's technical infrastructure. In a prison interview, Lin said the unnamed FBI informant was a full partner in Incognito who handled most of the site's administration, including resolving disputes and deciding which vendors could sell drugs and which would be removed. Lin claims it was the FBI that was running the drug-trading site. Prosecutors deny that the informant was Lin's equal, describing him as a subordinate rather than a peer, and deny that the government ran the site.

  9. Mozilla Advises Windows 7/8/8.1 Users to Switch to Linux

    Mozilla says Firefox 115 ESR, released in July 2023, is the last Firefox version to support Windows 7/8/8.1. It will ship the final security update for Firefox 115 ESR at the end of February 2026 and then end support. Mozilla advises Windows 7/8/8.1 users to upgrade to Windows 10 or later; for users whose PCs cannot upgrade to Windows 10 or 11 because of restrictions set by Microsoft, Mozilla recommends switching to a Linux distribution.

  10. The Cancer Genomes of Domestic Cats

    Cancer is a common cause of death in domestic cats, yet their cancer genomes remain poorly understood. According to a study published this week in Science, scientists sequenced the cancer genomes of 493 samples spanning 13 different types of feline cancer, along with matched healthy control tissue. Domestic cats not only share living environments with humans but often develop the same non-cancer comorbidities as their owners (such as diabetes), making them an important but underused resource for tumor research. Comparing nearly 1,000 human cancer genes with their feline counterparts, the researchers found oncogenes with similar prevalence in both species, such as TP53. They also found cancer driver genes, tumor-susceptibility genes, and evidence of certain viral sequences in the feline cancer genomes.