Weekly Digest — 2026-W06
131 unique stories (2026-02-02 → 2026-02-08), aggregated across 8 sources.
Hacker News(42)
- xAI joins SpaceX (www.spacex.com)
- Anki ownership transferred to AnkiHub (forums.ankiweb.net)
- The Codex App (openai.com)
- Hacking Moltbook (www.wiz.io)
- Todd C. Miller – Sudo maintainer for over 30 years (www.millert.dev)
- Ask HN: Who is hiring? (February 2026)
- Lessons Learned Shipping 500 Units of My First Hardware Product (www.simonberens.com)
- 221 Cannon is Not For Sale (fredbenenson.com)
- Xcode 26.3 – Developers can leverage coding agents directly in Xcode (www.apple.com)
- X offices raided in France (apnews.com)
- Deno Sandbox (deno.com)
- France dumps Zoom and Teams as Europe seeks digital autonomy from the US (apnews.com)
GitHub Trending(27)
- thedotmack / claude-mem
A Claude Code plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude's agent-sdk), and injects relevant context back into future sessions.
- ThePrimeagen / 99
Neovim AI agent done right
- termux / termux-app
Termux - a terminal emulator application for Android OS, extendible by a variety of packages.
- pedramamini / Maestro
Agent Orchestration Command Center
- netbirdio / netbird
Connect your devices into a secure WireGuard®-based overlay network with SSO, MFA and granular access controls.
- OpenBMB / ChatDev
ChatDev 2.0: Dev All through LLM-powered Multi-Agent Collaboration
- masoncl / review-prompts
AI review prompts
- openai / skills
Skills Catalog for Codex
- automazeio / ccpm
Project management system for Claude Code using GitHub Issues and Git worktrees for parallel agent execution.
- obra / superpowers
An agentic skills framework & software development methodology that works.
- virattt / dexter
An autonomous agent for deep financial research
- disler / claude-code-hooks-mastery
Master Claude Code Hooks
Hugging Face(31)
- ASTRA: Automated Synthesis of agentic Trajectories and Reinforcement Arenas
Large language models (LLMs) are increasingly used as tool-augmented agents for multi-step decision making, yet training robust tool-using agents remains challenging. Existing methods still require manual intervention, depend on non-verifiable simulated environments, rely exclusively on either supervised fine-tuning (SFT) or reinforcement learning (RL), and struggle with stable long-horizon, multi-turn learning. To address these challenges, we introduce ASTRA, a fully automated end-to-end framework for training tool-augmented language model agents via scalable data synthesis and verifiable reinforcement learning. ASTRA integrates two complementary components. First, a pipeline that leverages the static topology of tool-call graphs synthesizes diverse, structurally grounded trajectories, instilling broad and transferable tool-use competence. Second, an environment synthesis framework that captures the rich, compositional topology of human semantic reasoning converts decomposed question-answer traces into independent, code-executable, and rule-verifiable environments, enabling deterministic multi-turn RL. Based on this method, we develop a unified training methodology that integrates SFT with online RL using trajectory-level rewards to balance task completion and interaction efficiency. Experiments on multiple agentic tool-use benchmarks demonstrate that ASTRA-trained models achieve state-of-the-art performance at comparable scales, approaching closed-source systems while preserving core reasoning ability. We release the full pipelines, environments, and trained models at https://github.com/LianjiaTech/astra.
- Quartet II: Accurate LLM Pre-Training in NVFP4 by Improved Unbiased Gradient Estimation
The NVFP4 lower-precision format, supported in hardware by NVIDIA Blackwell GPUs, promises to allow, for the first time, end-to-end fully-quantized pre-training of massive models such as LLMs. Yet, existing quantized training methods still sacrifice some of the representation capacity of this format in favor of more accurate unbiased quantized gradient estimation by stochastic rounding (SR), losing noticeable accuracy relative to standard FP16 and FP8 training. In this paper, we improve the state of the art for quantized training in NVFP4 via a novel unbiased quantization routine for micro-scaled formats, called MS-EDEN, that has more than 2x lower quantization error than SR. We integrate it into a novel fully-NVFP4 quantization scheme for linear layers, called Quartet II. We show analytically that Quartet II achieves consistently better gradient estimation across all major matrix multiplications, both on the forward and on the backward passes. In addition, our proposal synergizes well with recent training improvements aimed specifically at NVFP4. We further validate Quartet II on end-to-end LLM training with up to 1.9B parameters on 38B tokens. We provide kernels for execution on NVIDIA Blackwell GPUs with up to 4.2x speedup over BF16. Our code is available at https://github.com/IST-DASLab/Quartet-II.
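The stochastic rounding (SR) baseline the abstract compares against is easy to state concretely. Below is a minimal NumPy sketch of unbiased SR to a uniform grid; note this is the generic textbook routine, not the paper's MS-EDEN, which targets micro-scaled NVFP4 blocks and claims over 2x lower quantization error.

```python
import numpy as np

def stochastic_round(x: np.ndarray, step: float, rng=None) -> np.ndarray:
    """Round x to a grid of spacing `step`, unbiased: E[result] == x.
    Each value rounds up with probability equal to its fractional
    position between the two neighboring grid points."""
    rng = rng or np.random.default_rng()
    scaled = x / step
    lo = np.floor(scaled)
    p_up = scaled - lo                       # distance past the lower grid point
    return (lo + (rng.random(x.shape) < p_up)) * step

# Unbiasedness check: averaging many stochastic roundings recovers x,
# whereas deterministic rounding of 0.3 to an integer grid always gives 0.
x = np.array([0.3])
print(np.mean([stochastic_round(x, step=1.0) for _ in range(100_000)]))  # ~0.3
```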
- THINKSAFE: Self-Generated Safety Alignment for Reasoning Models
Large reasoning models (LRMs) achieve remarkable performance by leveraging reinforcement learning (RL) on reasoning tasks to generate long chain-of-thought (CoT) reasoning. However, this over-optimization often prioritizes compliance, making models vulnerable to harmful prompts. To mitigate this safety degradation, recent approaches rely on external teacher distillation, yet this introduces a distributional discrepancy that degrades native reasoning. We propose ThinkSafe, a self-generated alignment framework that restores safety alignment without external teachers. Our key insight is that while compliance suppresses safety mechanisms, models often retain latent knowledge to identify harm. ThinkSafe unlocks this via lightweight refusal steering, guiding the model to generate in-distribution safety reasoning traces. Fine-tuning on these self-generated responses effectively realigns the model while minimizing distribution shift. Experiments on DeepSeek-R1-Distill and Qwen3 show ThinkSafe significantly improves safety while preserving reasoning proficiency. Notably, it achieves superior safety and comparable reasoning to GRPO, with significantly reduced computational cost. Code, models, and datasets are available at https://github.com/seanie12/ThinkSafe.git.
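The "lightweight refusal steering" step is, as described, a form of activation steering. Here is a hypothetical sketch of that general technique for HuggingFace-style models; the layer choice, scaling factor, and contrast-prompt construction are assumptions, not the paper's recipe.

```python
import torch

def build_refusal_vector(model, tokenizer, layer: int,
                         harmful: list[str], harmless: list[str]) -> torch.Tensor:
    """Illustrative steering vector: mean last-token activation on harmful
    prompts minus the mean on harmless ones, at a chosen hidden layer."""
    def mean_hidden(prompts):
        acts = []
        for p in prompts:
            ids = tokenizer(p, return_tensors="pt").input_ids
            with torch.no_grad():
                out = model(ids, output_hidden_states=True)
            acts.append(out.hidden_states[layer][0, -1])
        return torch.stack(acts).mean(0)
    return mean_hidden(harmful) - mean_hidden(harmless)

def steer(layer_module, vec: torch.Tensor, alpha: float = 4.0):
    """Register a forward hook that nudges the residual stream toward
    refusal while the model generates its safety reasoning trace."""
    def hook(_mod, _inp, out):
        h = out[0] if isinstance(out, tuple) else out
        h = h + alpha * vec.to(h.dtype) / vec.norm()
        return (h,) + tuple(out[1:]) if isinstance(out, tuple) else h
    return layer_module.register_forward_hook(hook)
```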
- Golden Goose: A Simple Trick to Synthesize Unlimited RLVR Tasks from Unverifiable Internet Text
Reinforcement Learning with Verifiable Rewards (RLVR) has become a cornerstone for unlocking complex reasoning in Large Language Models (LLMs). Yet, scaling up RL is bottlenecked by limited existing verifiable data, where improvements increasingly saturate over prolonged training. To overcome this, we propose Golden Goose, a simple trick to synthesize unlimited RLVR tasks from unverifiable internet text by constructing a multiple-choice question-answering version of the fill-in-the-middle task. Given a source text, we prompt an LLM to identify and mask key reasoning steps, then generate a set of diverse, plausible distractors. This enables us to leverage reasoning-rich unverifiable corpora typically excluded from prior RLVR data construction (e.g., science textbooks) to synthesize GooseReason-0.7M, a large-scale RLVR dataset with over 0.7 million tasks spanning mathematics, programming, and general scientific domains. Empirically, GooseReason effectively revives models saturated on existing RLVR data, yielding robust, sustained gains under continuous RL and achieving new state-of-the-art results for 1.5B and 4B-Instruct models across 15 diverse benchmarks. Finally, we deploy Golden Goose in a real-world setting, synthesizing RLVR tasks from raw FineWeb scrapes for the cybersecurity domain, where no prior RLVR data exists. Training Qwen3-4B-Instruct on the resulting data GooseReason-Cyber sets a new state-of-the-art in cybersecurity, surpassing a 7B domain-specialized model with extensive domain-specific pre-training and post-training. This highlights the potential of automatically scaling up RLVR data by exploiting abundant, reasoning-rich, unverifiable internet text.
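To make the trick concrete, here is a hypothetical sketch of the synthesis loop; the `llm` callable and prompt wording are placeholders rather than the paper's actual prompts, and a real pipeline would add deduplication and quality filtering.

```python
import random

def make_rlvr_task(source_text: str, llm) -> dict:
    """Convert unverifiable text into a verifiable multiple-choice
    fill-in-the-middle task: mask a key reasoning step, then generate
    plausible distractors; the reward reduces to an index match."""
    step = llm("Quote one key reasoning step from this text, verbatim:\n"
               + source_text)
    masked = source_text.replace(step, "[MASKED STEP]")
    distractors = [llm("Write a plausible but incorrect replacement for the "
                       "masked step in:\n" + masked) for _ in range(3)]
    options = distractors + [step]
    random.shuffle(options)
    return {"question": masked + "\n\nWhich option fills the mask?",
            "options": options,
            "answer": options.index(step)}  # rule-verifiable reward target
```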
- TTCS: Test-Time Curriculum Synthesis for Self-Evolving
Test-Time Training offers a promising way to improve the reasoning ability of large language models (LLMs) by adapting the model using only the test questions. However, existing methods struggle with difficult reasoning problems for two reasons: raw test questions are often too difficult to yield high-quality pseudo-labels, and the limited size of test sets makes continuous online updates prone to instability. To address these limitations, we propose TTCS, a co-evolving test-time training framework. Specifically, TTCS initializes two policies from the same pretrained model: a question synthesizer and a reasoning solver. These policies evolve through iterative optimization: the synthesizer generates progressively challenging question variants conditioned on the test questions, creating a structured curriculum tailored to the solver's current capability, while the solver updates itself using self-consistency rewards computed from multiple sampled responses on both original test and synthetic questions. Crucially, the solver's feedback guides the synthesizer to generate questions aligned with the model's current capability, and the generated question variants in turn stabilize the solver's test-time training. Experiments show that TTCS consistently strengthens the reasoning ability on challenging mathematical benchmarks and transfers to general-domain tasks across different LLM backbones, highlighting a scalable path towards dynamically constructing test-time curricula for self-evolving. Our code and implementation details are available at https://github.com/XMUDeepLIT/TTCS.
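The solver's label-free reward is simple to illustrate: sample several answers per question and reward agreement with the majority vote. A minimal sketch follows (exact tie-breaking and reward shaping are assumptions):

```python
from collections import Counter

def self_consistency_rewards(answers: list[str]) -> list[float]:
    """Reward each sampled answer 1.0 if it matches the majority answer
    across all samples, else 0.0; no ground-truth label is needed."""
    majority, _ = Counter(answers).most_common(1)[0]
    return [float(a == majority) for a in answers]

# Five sampled solutions whose extracted final answers are:
print(self_consistency_rewards(["42", "42", "41", "42", "7"]))
# [1.0, 1.0, 0.0, 1.0, 0.0] -> pseudo-rewards for test-time RL
```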
- Do Reasoning Models Enhance Embedding Models?
State-of-the-art embedding models are increasingly derived from decoder-only Large Language Model (LLM) backbones adapted via contrastive learning. Given the emergence of reasoning models trained via Reinforcement Learning with Verifiable Rewards (RLVR), a natural question arises: does enhanced reasoning translate to superior semantic representations when these models serve as embedding initializations? Contrary to expectation, our evaluation on MTEB and BRIGHT reveals a **null effect**: embedding models initialized from RLVR-tuned backbones yield no consistent performance advantage over their base counterparts when subjected to identical training recipes. To unpack this paradox, we introduce **H**ierarchical **R**epresentation **S**imilarity **A**nalysis (HRSA), a framework that decomposes similarity across representation, geometry, and function levels. HRSA reveals that while RLVR induces an irreversible reorganization of the latent manifold's local geometry and a reversible drift of its coordinate basis, it preserves the global manifold geometry and linear readout. Consequently, subsequent contrastive learning drives strong alignment between base- and reasoning-initialized models, a phenomenon we term **Manifold Realignment**. Empirically, our findings suggest that unlike Supervised Fine-Tuning (SFT), RLVR optimizes trajectories within an existing semantic landscape rather than fundamentally restructuring the landscape itself.
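The abstract doesn't spell out HRSA's measures, but geometry-level comparisons of this kind are commonly made with linear CKA, sketched below as one plausible ingredient; `X` and `Y` would be hidden-state matrices from the base and RLVR-tuned backbones on the same inputs.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two representation
    matrices of shape (n_samples, dim): 1.0 means the geometries match
    up to rotation and isotropic scaling, near 0 means unrelated."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)
```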
- Green-VLA: Staged Vision-Language-Action Model for Generalist Robots
We introduce Green-VLA, a staged Vision-Language-Action (VLA) framework for real-world deployment on the Green humanoid robot while maintaining generalization across diverse embodiments. Green-VLA follows a five-stage curriculum: (L0) foundational VLMs, (L1) multimodal grounding, (R0) multi-embodiment pretraining, (R1) embodiment-specific adaptation, and (R2) reinforcement-learning (RL) policy alignment. We couple a scalable data-processing pipeline (3,000 hours of demonstrations) with temporal alignment and quality filtering, and use a unified, embodiment-aware action interface enabling a single policy to control humanoids, mobile manipulators, and fixed-base arms. At inference, the VLA controller is enhanced with episode-progress prediction, out-of-distribution detection, and joint-prediction-based guidance to improve safety and precise target selection. Experiments on Simpler BRIDGE WidowX and CALVIN ABC-D, as well as real-robot evaluations, show strong generalization and performance gains from RL alignment in success rate, robustness, and long-horizon efficiency.
- UniReason 1.0: A Unified Reasoning Framework for World Knowledge Aligned Image Generation and Editing
Unified multimodal models often struggle with complex synthesis tasks that demand deep reasoning, and typically treat text-to-image generation and image editing as isolated capabilities rather than interconnected reasoning steps. To address this, we propose UniReason, a unified framework that harmonizes these two tasks through a dual reasoning paradigm. We formulate generation as world knowledge-enhanced planning to inject implicit constraints, and leverage editing capabilities for fine-grained visual refinement to further correct visual errors via self-reflection. This approach unifies generation and editing within a shared representation, mirroring the human cognitive process of planning followed by refinement. We support this framework by systematically constructing a large-scale reasoning-centric dataset (~300k samples) covering five major knowledge domains (e.g., cultural commonsense, physics, etc.) for planning, alongside an agent-generated corpus for visual self-correction. Extensive experiments demonstrate that UniReason achieves advanced performance on reasoning-intensive benchmarks such as WISE, KrisBench and UniREditBench, while maintaining superior general synthesis capabilities.
- SWE-Universe: Scale Real-World Verifiable Environments to Millions
We propose SWE-Universe, a scalable and efficient framework for automatically constructing real-world software engineering (SWE) verifiable environments from GitHub pull requests (PRs). To overcome the prevalent challenges of automated construction, such as low production yield, weak verifiers, and prohibitive cost, our framework utilizes a building agent powered by an efficient custom-trained model. This agent employs iterative self-verification and in-loop hacking detection to ensure the reliable generation of high-fidelity, verifiable tasks. Using this method, we scale the number of real-world multilingual SWE environments to near-million scale (807,693). We demonstrate the profound value of our environments through large-scale agentic mid-training and reinforcement learning. Finally, we applied this technique to Qwen3-Max-Thinking and achieved a score of 75.3% on SWE-Bench Verified. Our work provides both a critical resource and a robust methodology to advance the next generation of coding agents.
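"Verifiable" in this setting usually follows the SWE-bench-style criterion: a task's tests must fail before the PR's patch and pass after it. A minimal harness sketch of that check follows (paths, commands, and the revert step are illustrative, not the paper's agent):

```python
import subprocess

def is_verifiable(repo: str, patch: str, test_cmd: list[str]) -> bool:
    """A PR yields a valid task iff its test fails pre-patch and
    passes post-patch (the fail-to-pass criterion)."""
    def tests_pass() -> bool:
        return subprocess.run(test_cmd, cwd=repo).returncode == 0

    if tests_pass():                       # must fail before the fix
        return False
    subprocess.run(["git", "apply", "-"], cwd=repo,
                   input=patch.encode(), check=True)
    passed = tests_pass()                  # must pass after the fix
    subprocess.run(["git", "apply", "-R", "-"], cwd=repo,
                   input=patch.encode(), check=True)  # restore the tree
    return passed
```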
- PixelGen: Pixel Diffusion Beats Latent Diffusion with Perceptual Loss
Pixel diffusion generates images directly in pixel space in an end-to-end manner, avoiding the artifacts and bottlenecks introduced by VAEs in two-stage latent diffusion. However, it is challenging to optimize high-dimensional pixel manifolds that contain many perceptually irrelevant signals, leaving existing pixel diffusion methods lagging behind latent diffusion models. We propose PixelGen, a simple pixel diffusion framework with perceptual supervision. Instead of modeling the full image manifold, PixelGen introduces two complementary perceptual losses to guide the diffusion model toward learning a more meaningful perceptual manifold. An LPIPS loss facilitates learning better local patterns, while a DINO-based perceptual loss strengthens global semantics. With perceptual supervision, PixelGen surpasses strong latent diffusion baselines. It achieves an FID of 5.11 on ImageNet-256 without classifier-free guidance using only 80 training epochs, and demonstrates favorable scaling performance on large-scale text-to-image generation with a GenEval score of 0.79. PixelGen requires no VAEs, no latent representations, and no auxiliary stages, providing a simpler yet more powerful generative paradigm. Code is publicly available at https://github.com/Zehong-Ma/PixelGen.
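A minimal PyTorch sketch of such a combined objective, assuming `x0_pred` is the model's clean-image prediction at a diffusion step and inputs are normalized as each network expects; the loss weights and DINO variant are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F
import lpips  # pip install lpips

lpips_net = lpips.LPIPS(net="vgg").eval()          # expects inputs in [-1, 1]
dino = torch.hub.load("facebookresearch/dino:main", "dino_vits16").eval()

def pixelgen_like_loss(x0_pred, x0, lam_lpips=1.0, lam_dino=1.0):
    """Pixel-space reconstruction loss plus two perceptual terms:
    LPIPS for local patterns, DINO features for global semantics."""
    l_pix = F.mse_loss(x0_pred, x0)                            # diffusion target
    l_lpips = lpips_net(x0_pred, x0).mean()                    # local perceptual
    f_pred, f_real = dino(x0_pred), dino(x0)                   # CLS features
    l_dino = 1 - F.cosine_similarity(f_pred, f_real, dim=-1).mean()
    return l_pix + lam_lpips * l_lpips + lam_dino * l_dino
```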
- SLIME: Stabilized Likelihood Implicit Margin Enforcement for Preference Optimization
Direct preference optimization methods have emerged as a computationally efficient alternative to Reinforcement Learning from Human Feedback (RLHF) for aligning Large Language Models (LLMs). Latest approaches have streamlined the alignment process by deriving implicit reward functions, yet they often suffer from a critical objective mismatch: optimizing the relative margin between chosen and rejected responses does not guarantee the preservation of the chosen response's absolute likelihood. This can lead to "unlearning", where the model degrades the probability of high-quality outputs to satisfy margin constraints, and "formatting collapse" caused by the over-penalization of rejected sequences. In this work, we introduce SLIME (Stabilized Likelihood Implicit Margin Enforcement), a reference-free alignment objective designed to decouple preference learning from generation quality. SLIME incorporates a three-pronged objective: (1) an anchoring term to maximize the likelihood of preferred responses; (2) a stabilizing penalty that prevents the probabilities of rejected tokens from collapsing to zero; and (3) a dual-margin mechanism that combines hard and soft constraints for precise boundary shaping. Our results demonstrate that SLIME achieves superior performance compared to state-of-the-art baselines while maintaining higher generation stability.
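The abstract names the three terms without giving their functional forms, so the following is only a schematic composition under stated guesses: a likelihood anchor, a floor on rejected log-probabilities, and a hard-plus-soft margin pair.

```python
import torch
import torch.nn.functional as F

def slime_like_loss(logp_chosen: torch.Tensor, logp_rejected: torch.Tensor,
                    hard_margin=1.0, beta=0.1, floor=-20.0,
                    lam_anchor=1.0, lam_floor=0.1) -> torch.Tensor:
    """Schematic reference-free preference loss; every term below is an
    illustrative stand-in for the abstract's three ingredients."""
    anchor = -logp_chosen.mean()                       # (1) keep chosen likely
    floor_pen = F.relu(floor - logp_rejected).mean()   # (2) stop collapse to -inf
    margin = logp_chosen - logp_rejected
    hard = F.relu(hard_margin - margin).mean()         # (3a) hard margin hinge
    soft = -F.logsigmoid(beta * margin).mean()         # (3b) soft margin
    return lam_anchor * anchor + lam_floor * floor_pen + hard + soft
```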
- Good SFT Optimizes for SFT, Better SFT Prepares for Reinforcement Learning
Post-training of reasoning LLMs is a holistic process that typically consists of an offline SFT stage followed by an online reinforcement learning (RL) stage. However, SFT is often optimized in isolation to maximize SFT performance alone. We show that, after identical RL training, models initialized from stronger SFT checkpoints can significantly underperform those initialized from weaker ones. We attribute this to a mismatch typical in current SFT-RL pipelines: the distribution that generates the offline SFT data can differ substantially from the policy optimized during online RL, which learns from its own rollouts. We propose PEAR (Policy Evaluation-inspired Algorithm for Offline Learning Loss Re-weighting), an SFT-stage method that corrects this mismatch and better prepares the model for RL. PEAR uses importance sampling to reweight the SFT loss, with three variants operating at the token, block, and sequence levels. It can be used to augment standard SFT objectives and incurs little additional training overhead once probabilities for the offline data are collected. We conduct controlled experiments on verifiable reasoning games and mathematical reasoning tasks on Qwen 2.5, Qwen 3, and DeepSeek-distilled models. PEAR consistently improves post-RL performance over canonical SFT, with pass@8 gains of up to 14.6 percent on AIME2025. Our results suggest that PEAR is an effective step toward more holistic LLM post-training by designing and evaluating SFT with downstream RL in mind rather than in isolation.
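The token-level variant is straightforward to sketch: weight each token's negative log-likelihood by the ratio of current-policy to data-generator probabilities, which is plain importance sampling; the clipping constant and the stop-gradient on the weight are assumptions.

```python
import torch

def pear_like_token_loss(logp_policy: torch.Tensor,
                         logp_behavior: torch.Tensor,
                         clip: float = 2.0) -> torch.Tensor:
    """Importance-weighted SFT loss at the token level.
    logp_policy:   per-token log-probs under the model being trained
    logp_behavior: per-token log-probs under the distribution that
                   generated the offline SFT data (collected once)."""
    with torch.no_grad():
        # importance ratio pi_theta / pi_behavior, clipped for stability
        w = (logp_policy - logp_behavior).exp().clamp(max=clip)
    return -(w * logp_policy).mean()
```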
Solidot(31)
- The Sun unleashes an X8.11 flare
Sunspot region AR4366, which emerged only days ago, produced 17 M-class flares and 3 X-class flares within 24 hours, including one rated X8.11. It is among the strongest flares of the past two decades and the third strongest of the current Solar Cycle 25. The Sun is in an active phase, and AR4366 remains highly unstable, which means more high-intensity flares are likely.
- Largest manga piracy site shut down, operator arrested
Japan's anti-piracy group CODA (Content Overseas Distribution Association) announced that Shanghai police detained a man from Guangxi last November on charges of running BATO.TO, the largest manga piracy operation. BATO.TO was more than one website: it spanned some 60 domains, including xbato.com, bato.to, and mangapark.io. The man has since been released; he admitted to operating the sites and faces formal prosecution. Police seized his computers and are still investigating, analyzing servers to identify additional operators. The BATO sites kept running for a time after his detention, until all of them went offline on January 19. The infringed Japanese publishers include Kadokawa, Kodansha, Shueisha, Shogakukan, and Square Enix, on whose behalf CODA's Beijing office filed a criminal complaint with the public security bureau; CODA also sought cooperation from a Tencent subsidiary. The BATO network drew 350 million visits a month, and 7.2 billion visits in total from October 2022 to October 2025.
- Blue Origin abandons suborbital tourism to focus on lunar program
Blue Origin announced a two-year pause of the New Shepard program, a move that may mean the permanent end of its suborbital space tourism business. Since entering service in 2015, the New Shepard rocket and capsule have completed 38 launches, all but one successful, carrying 98 people on suborbital flights. Why would Blue Origin wind down its longest-running program? CEO Dave Limp said the company is shifting its people and resources to the crewed lunar landing effort. The move surprised Blue Origin employees: the most recent suborbital flight had carried six people to space only 8 days earlier, the company still has 4 New Shepard rockets in various stages of readiness and two more capsules under construction, and last year it even discussed expanding the launch site. The program, however, has consistently lost money, tying up more than 500 employees and diverting the company's focus and resources.
- Bitcoin down 40% from its peak over the past four months
Bitcoin hit a record $123,742 in October 2025, but four months later it has slid to $76,000, roughly 40% below the peak. Bloomberg argues the slide stems not from panic but from an absence of buyers, momentum, and confidence. There was no obvious trigger: demand simply weakened, liquidity thinned, and the price has become uncorrelated with the broader market. Even as gold and silver swung violently in recent weeks, crypto barely reacted. Bitcoin fell nearly 11% in January, its fourth straight monthly decline and the longest losing streak since 2018. Social media sentiment shows little optimism about a bottom. Confidence among mainstream buyers is fading, and many who bought near the top are now underwater.
- MRI scans show exercise makes the brain look younger
According to a study published in the Journal of Sport and Health Science, sticking to regular aerobic exercise helps keep the brain young. Adults who kept up aerobic exercise for a full year had brains that looked nearly a year younger than those of people who did not change their activity habits. The study used MRI scans to estimate the brain's biological age. 130 healthy adults aged 26-58 took part and were randomly assigned to a moderate-to-vigorous aerobic exercise group or a usual-routine control group. Exercise-group participants completed two supervised 60-minute sessions per week in the lab, plus additional workouts at home to reach about 150 minutes of aerobic exercise per week. The researchers measured brain structure with MRI at baseline and after 12 months, and assessed cardiorespiratory fitness via peak oxygen uptake (VO2peak). After a year the two groups clearly diverged: brain age fell significantly in the exercise group, while it rose slightly in the control group. On average, brain age dropped by about 0.6 years in the exercise group and increased by about 0.35 years in the controls, though the latter change was not statistically significant. The two groups ended up about one year apart in brain age.
- China plans to launch Xihe-2 to the Sun-Earth L5 point within 2-3 years
China plans to launch Xihe-2 to the Sun-Earth L5 point at an opportune window between 2028 and 2029. Xihe is the mother of the suns in the Shan Hai Jing (Classic of Mountains and Seas), the deity who drives the Sun's chariot from east to west in the Chu Ci (Songs of Chu), and the title of the ancient Chinese officials who observed the heavens and drew up calendars. In October 2021 China successfully launched Xihe, its first solar exploration science and technology test satellite, formally entering the era of space-based solar observation. Nearly 5 years on, Xihe-2 has now formally begun. Unlike Xihe, which orbits the Earth, Xihe-2 will not. Li Chuan, chief designer of the Xihe science and application system and a professor at Nanjing University's School of Astronomy and Space Science, explained that the Sun-Earth system has 5 gravitational equilibrium points: L1, L2, and L3 lie on the Sun-Earth line, while L4 and L5 sit on Earth's orbit around the Sun, each forming an equilateral triangle with the Sun and Earth with sides of about 150 million kilometers; if Earth's direction of travel is taken as "forward", L5 trails "behind" the Earth. "To date, humanity has launched more than 70 solar probes, the vast majority stationed along the Sun-Earth line and a few orbiting the Sun, but none has yet been parked at the Sun-Earth L5 point. Xihe-2 will therefore give solar research a brand-new 'bystander' perspective," Li said. Sitting at a gravitational equilibrium point, Xihe-2 can hold a stable orbit without expending much energy, giving it a design life of up to 7 years.
- Ultra-processed foods should be treated like cigarettes, not food
According to a study published in the Milbank Quarterly, researchers at Harvard, Duke, and the University of Michigan argue that ultra-processed foods have far more in common with cigarettes than with fruits or vegetables and need stricter regulation. Ultra-processed foods are industrially manufactured products, typically made with emulsifiers or artificial colors and flavors, such as soft drinks, chips, and cookies. The researchers point to parallels in how ultra-processed foods and cigarettes are made: in both industries, manufacturers work to optimize the product's "dose" and how fast it acts on the body's reward pathways. Marketing foods as "low-fat" or "sugar-free" misleads consumers, much as 1950s advertising pitched cigarette filters as a protective innovation that in practice offered almost no real benefit. The researchers argue regulators should borrow from tobacco control in regulating ultra-processed foods.
- Spain plans to ban social media for children under 16
Spanish Prime Minister Pedro Sanchez said on Tuesday that the government plans to bar minors under 16 from social media and to require platforms to introduce age-verification systems, saying children must be protected from the digital wild west. Australia became the first country to ban under-16s from social media last December, and countries including the UK and France are weighing similar age limits. Sanchez said Spain will present a bill next week that holds social media executives accountable for illegal and hateful content and criminalizes the algorithmic manipulation and amplification of illegal content.
- Paris prosecutors raid X's offices in France
Paris prosecutors have raided X's offices in France. The search was carried out by the cybercrime unit with assistance from Europol, and is tied to an investigation opened in January 2025 into complaints about X's algorithm and the content it recommends. Prosecutors have also summoned Elon Musk and former X CEO Linda Yaccarino to appear at hearings in April. In a statement, prosecutors said the X platform circulates pornographic deepfake videos and Holocaust-denial content. The prosecutor's office also announced it is leaving the X platform and will communicate with the public via LinkedIn and Instagram.
- China bans hidden door handles
China's Ministry of Industry and Information Technology has issued a new mandatory safety standard, "Safety Technical Requirements for Automobile Door Handles", banning hidden door handles on electric vehicles and making China the first country to outlaw the design. Popularized by Tesla, hidden handles have drawn scrutiny from regulators worldwide after a string of fatal incidents. The new rules require cars sold in China to have exterior door handles with a mechanical release. The standard takes effect on January 1, 2027; models that have already received type approval must be redesigned to comply by January 2029. The move follows several high-profile accidents in China, including two Xiaomi EV fires in which the doors reportedly could not be opened after a power loss; the occupants could neither escape nor be rescued and died.
- Ukraine and SpaceX work to stop Russian drones from attacking over Starlink
Ukraine and SpaceX have recently been working together to stop Russian drones from using Starlink in attacks. Ukraine's Ministry of Defense said Starlink users in Ukraine will soon be required to register their terminals; verified, registered terminals will be whitelisted and keep satellite internet access inside Ukraine, while unregistered terminals will be disconnected. Russia has acquired Starlink terminals through black-market channels, and both the strike and reconnaissance variants of its Molniya-2 drone use Starlink for beyond-line-of-sight control and data transmission, enabling precision strikes at far greater range. Molniya-2 drones have been found carrying an F8 mini PC running a genuinely licensed copy of Windows 11.
- Raspberry Pi raises prices again as memory costs soar
The AI boom keeps pushing up memory and SSD prices, forcing PC makers to adjust prices repeatedly to absorb rising component costs. Raspberry Pi has announced its second price increase in two months. Every Raspberry Pi 4 and Raspberry Pi 5 with 2GB of RAM or more is going up: 2GB models rise by $10, 4GB by $15, 8GB by $30, and the 16GB version by a steep $60. The 16GB Pi 5 now sells for as much as $205, for a class of single-board computers once known for being cheap.