OrangeBot.AI Digest — 2026-02-19

47 headlines across 4 sources, aggregated for the day.

Hacker News(15)

  1. We're no longer attracting top talent: the brain drain killing American science (www.theguardian.com)
  2. California's new bill requires DOJ-approved 3D printers that report themselves (blog.adafruit.com)
  3. IRS lost 40% of IT staff, 80% of tech leaders in 'efficiency' shakeup (www.theregister.com)
  4. South Korean ex president Yoon Suk Yeol jailed for life for leading insurrection (www.theguardian.com)
  5. DOGE Bro's Grant Review Process Was Literally Just Asking ChatGPT 'Is This DEI?' (www.techdirt.com)
  6. AI makes you boring (www.marginalia.nu)
  7. Show HN: Micasa – track your house from the terminal (micasa.dev)
  8. Gemini 3.1 Pro (deepmind.google)
  9. Gemini 3.1 Pro (blog.google)
  10. America vs. Singapore: You can't save your way out of economic shocks (www.governance.fyi)
  11. DOGE Track (dogetrack.info)
  12. Pebble Production: February Update (repebble.com)
  13. Paged Out Issue #8 [pdf] (pagedout.institute)
  14. Don't Trust the Salt: AI Summarization, Multilingual Safety, and LLM Guardrails (royapakzad.substack.com)
  15. European Tech Alternatives (eutechmap.com)

GitHub Trending(8)

  1. obra / superpowers

    An agentic skills framework & software development methodology that works.

  2. RichardAtCT / claude-code-telegram

    A powerful Telegram bot that provides remote access to Claude Code, enabling developers to interact with their projects from anywhere with full AI assistance and session persistence.

  3. open-mercato / open-mercato

    AI‑supportive CRM / ERP foundation framework — built to power R&D, new processes, operations, and growth. It’s modular, extensible, and designed for teams that want strong defaults with room to customize everything. Better than Django, Retool and other alternatives - and Enterprise Grade!

  4. harvard-edge / cs249r_book

    Introduction to Machine Learning Systems

  5. HailToDodongo / pyrite64

    N64 Game-Engine and Editor using libdragon & tiny3d

  6. openclaw / openclaw

    Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

  7. freemocap / freemocap

    Free Motion Capture for Everyone 💀✨

  8. p-e-w / heretic

    Fully automatic censorship removal for language models

Hugging Face(15)

  1. SLA2: Sparse-Linear Attention with Learnable Routing and QAT

    Sparse-Linear Attention (SLA) combines sparse and linear attention to accelerate diffusion models and has shown strong performance in video generation. However, (i) SLA relies on a heuristic split that assigns computations to the sparse or linear branch based on attention-weight magnitude, which can be suboptimal. Additionally, (ii) after formally analyzing the attention error in SLA, we identify a mismatch between SLA and a direct decomposition into sparse and linear attention. We propose SLA2, which introduces (I) a learnable router that dynamically selects whether each attention computation should use sparse or linear attention, (II) a more faithful and direct sparse-linear attention formulation that uses a learnable ratio to combine the sparse and linear attention branches, and (III) a sparse + low-bit attention design, where low-bit attention is introduced via quantization-aware fine-tuning to reduce quantization error. Experiments show that on video diffusion models, SLA2 can achieve 97% attention sparsity and deliver an 18.6x attention speedup while preserving generation quality.
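
A toy sketch of the two combination ideas described above, under explicit assumptions: a router score hard-selects the dominant branch per query, and a learnable ratio blends the two branches directly. The function name, the hard-routing rule, and the blending form are all illustrative, not the paper's actual formulation.

```python
def combine_attention(sparse_out, linear_out, router_score, ratio):
    """Blend sparse- and linear-attention outputs for one query.

    Illustrative only: the router score picks the dominant branch
    (SLA2's learnable router is trained; here it is just a float),
    and `ratio` is the learnable combination weight.
    """
    dominant, other = ((sparse_out, linear_out) if router_score > 0
                       else (linear_out, sparse_out))
    return [ratio * d + (1 - ratio) * o for d, o in zip(dominant, other)]
```

For example, with a positive router score and ratio = 0.75, the output is three-quarters sparse branch and one-quarter linear branch.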

  2. RynnBrain: Open Embodied Foundation Models

    Despite rapid progress in multimodal foundation models, the embodied intelligence community still lacks a unified, physically grounded foundation model that integrates perception, reasoning, and planning within real-world spatial-temporal dynamics. We introduce RynnBrain, an open-source spatiotemporal foundation model for embodied intelligence. RynnBrain strengthens four core capabilities in a unified framework: comprehensive egocentric understanding, diverse spatiotemporal localization, physically grounded reasoning, and physics-aware planning. The RynnBrain family comprises three foundation model scales (2B, 8B, and 30B-A3B MoE) and four post-trained variants tailored for downstream embodied tasks (i.e., RynnBrain-Nav, RynnBrain-Plan, and RynnBrain-VLA) or complex spatial reasoning tasks (i.e., RynnBrain-CoP). In extensive evaluations on 20 embodied benchmarks and 8 general vision understanding benchmarks, our RynnBrain foundation models outperform existing embodied foundation models by a significant margin. The post-trained model suite further substantiates two key potentials of the RynnBrain foundation model: (i) enabling physically grounded reasoning and planning, and (ii) serving as a strong pretrained backbone that can be efficiently adapted to diverse embodied tasks.

  3. Learning Humanoid End-Effector Control for Open-Vocabulary Visual Loco-Manipulation

    Visual loco-manipulation of arbitrary objects in the wild with humanoid robots requires accurate end-effector (EE) control and a generalizable understanding of the scene via visual inputs (e.g., RGB-D images). Existing approaches are based on real-world imitation learning and exhibit limited generalization due to the difficulty in collecting large-scale training datasets. This paper presents a new paradigm, HERO, for object loco-manipulation with humanoid robots that combines the strong generalization and open-vocabulary understanding of large vision models with strong control performance from simulated training. We achieve this by designing an accurate residual-aware EE tracking policy. This EE tracking policy combines classical robotics with machine learning. It uses a) inverse kinematics to convert residual end-effector targets into reference trajectories, b) a learned neural forward model for accurate forward kinematics, c) goal adjustment, and d) replanning. Together, these innovations help us cut down the end-effector tracking error by 3.2x. We use this accurate end-effector tracker to build a modular system for loco-manipulation, where we use open-vocabulary large vision models for strong visual generalization. Our system is able to operate in diverse real-world environments, from offices to coffee shops, where the robot is able to reliably manipulate various everyday objects (e.g., mugs, apples, toys) on surfaces ranging from 43cm to 92cm in height. Systematic modular and end-to-end tests in simulation and the real world demonstrate the effectiveness of our proposed design. We believe the advances in this paper can open up new ways of training humanoid robots to interact with daily objects.

  4. CADEvolve: Creating Realistic CAD via Program Evolution

    Computer-Aided Design (CAD) delivers rapid, editable modeling for engineering and manufacturing. Recent AI progress now makes full automation feasible for various CAD tasks. However, progress is bottlenecked by data: public corpora mostly contain sketch-extrude sequences, lack complex operations, multi-operation composition and design intent, and thus hinder effective fine-tuning. Attempts to bypass this with frozen VLMs often yield simple or invalid programs due to limited 3D grounding in current foundation models. We present CADEvolve, an evolution-based pipeline and dataset that starts from simple primitives and, via VLM-guided edits and validations, incrementally grows CAD programs toward industrial-grade complexity. The result is 8k complex parts expressed as executable CadQuery parametric generators. After multi-stage post-processing and augmentation, we obtain a unified dataset of 1.3m scripts paired with rendered geometry and exercising the full CadQuery operation set. A VLM fine-tuned on CADEvolve achieves state-of-the-art results on the Image2CAD task across the DeepCAD, Fusion 360, and MCB benchmarks.

  5. Empty Shelves or Lost Keys? Recall Is the Bottleneck for Parametric Factuality

    Standard factuality evaluations of LLMs treat all errors alike, obscuring whether failures arise from missing knowledge (empty shelves) or from limited access to encoded facts (lost keys). We propose a behavioral framework that profiles factual knowledge at the level of facts rather than questions, characterizing each fact by whether it is encoded, and then by how accessible it is: cannot be recalled, can be directly recalled, or can only be recalled with inference-time computation (thinking). To support such profiling, we introduce WikiProfile, a new benchmark constructed via an automated pipeline with a prompted LLM grounded in web search. Across 4 million responses from 13 LLMs, we find that encoding is nearly saturated in frontier models on our benchmark, with GPT-5 and Gemini-3 encoding 95-98% of facts. However, recall remains a major bottleneck: many errors previously attributed to missing knowledge instead stem from failures to access it. These failures are systematic and disproportionately affect long-tail facts and reverse questions. Finally, we show that thinking improves recall and can recover a substantial fraction of failures, indicating that future gains may rely less on scaling and more on methods that improve how models utilize what they already encode.

  6. MAEB: Massive Audio Embedding Benchmark

    We introduce the Massive Audio Embedding Benchmark (MAEB), a large-scale benchmark covering 30 tasks across speech, music, environmental sounds, and cross-modal audio-text reasoning in 100+ languages. We evaluate 50+ models and find that no single model dominates across all tasks: contrastive audio-text models excel at environmental sound classification (e.g., ESC50) but score near random on multilingual speech tasks (e.g., SIB-FLEURS), while speech-pretrained models show the opposite pattern. Clustering remains challenging for all models, with even the best-performing model achieving only modest results. We observe that models excelling on acoustic understanding often perform poorly on linguistic tasks, and vice versa. We also show that the performance of audio encoders on MAEB correlates highly with their performance when used in audio large language models. MAEB is derived from MAEB+, a collection of 98 tasks. MAEB is designed to maintain task diversity while reducing evaluation cost, and it integrates into the MTEB ecosystem for unified evaluation across text, image, and audio modalities. We release MAEB and all 98 tasks along with code and a leaderboard at https://github.com/embeddings-benchmark/mteb.

  7. Towards a Science of AI Agent Reliability

    AI agents are increasingly deployed to execute important tasks. While rising accuracy scores on standard benchmarks suggest rapid progress, many agents still continue to fail in practice. This discrepancy highlights a fundamental limitation of current evaluations: compressing agent behavior into a single success metric obscures critical operational flaws. Notably, it ignores whether agents behave consistently across runs, withstand perturbations, fail predictably, or have bounded error severity. Grounded in safety-critical engineering, we provide a holistic performance profile by proposing twelve concrete metrics that decompose agent reliability along four key dimensions: consistency, robustness, predictability, and safety. Evaluating 14 agentic models across two complementary benchmarks, we find that recent capability gains have only yielded small improvements in reliability. By exposing these persistent limitations, our metrics complement traditional evaluations while offering tools for reasoning about how agents perform, degrade, and fail.
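
One of the four dimensions, consistency, can be illustrated with a minimal metric over repeated runs of the same task. The formula below is a stand-in to show why a single success rate hides variance; it is not one of the paper's twelve metrics.

```python
from statistics import mean, pstdev

def run_consistency(run_scores):
    """Toy consistency score in [0, 1]: penalize run-to-run variance
    relative to mean accuracy. Purely illustrative; the paper's actual
    metrics are not reproduced here."""
    mu = mean(run_scores)
    if mu == 0:
        return 0.0
    return max(0.0, 1.0 - pstdev(run_scores) / mu)
```

Two agents with identical 50% accuracy can then be separated: one that alternates between perfect and failed runs scores 0, while one that is stable scores near 1.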

  8. Multi-agent cooperation through in-context co-player inference

    Achieving cooperation among self-interested agents remains a fundamental challenge in multi-agent reinforcement learning. Recent work showed that mutual cooperation can be induced between "learning-aware" agents that account for and shape the learning dynamics of their co-players. However, existing approaches typically rely on hardcoded, often inconsistent, assumptions about co-player learning rules or enforce a strict separation between "naive learners" updating on fast timescales and "meta-learners" observing these updates. Here, we demonstrate that the in-context learning capabilities of sequence models allow for co-player learning awareness without requiring hardcoded assumptions or explicit timescale separation. We show that training sequence model agents against a diverse distribution of co-players naturally induces in-context best-response strategies, effectively functioning as learning algorithms on the fast intra-episode timescale. We find that the cooperative mechanism identified in prior work (where vulnerability to extortion drives mutual shaping) emerges naturally in this setting: in-context adaptation renders agents vulnerable to extortion, and the resulting mutual pressure to shape the opponent's in-context learning dynamics resolves into the learning of cooperative behavior. Our results suggest that standard decentralized reinforcement learning on sequence models combined with co-player diversity provides a scalable path to learning cooperative behaviors.

  9. World Action Models are Zero-shot Policies

    State-of-the-art Vision-Language-Action (VLA) models excel at semantic generalization but struggle to generalize to unseen physical motions in novel environments. We introduce DreamZero, a World Action Model (WAM) built upon a pretrained video diffusion backbone. Unlike VLAs, WAMs learn physical dynamics by predicting future world states and actions, using video as a dense representation of how the world evolves. By jointly modeling video and action, DreamZero learns diverse skills effectively from heterogeneous robot data without relying on repetitive demonstrations. This results in over 2x improvement in generalization to new tasks and environments compared to state-of-the-art VLAs in real robot experiments. Crucially, through model and system optimizations, we enable a 14B autoregressive video diffusion model to perform real-time closed-loop control at 7Hz. Finally, we demonstrate two forms of cross-embodiment transfer: video-only demonstrations from other robots or humans yield a relative improvement of over 42% on unseen task performance with just 10-20 minutes of data. More surprisingly, DreamZero enables few-shot embodiment adaptation, transferring to a new embodiment with only 30 minutes of play data while retaining zero-shot generalization.

  10. Reinforced Fast Weights with Next-Sequence Prediction

    Fast weight architectures offer a promising alternative to attention-based transformers for long-context modeling by maintaining constant memory overhead regardless of context length. However, their potential is limited by the next-token prediction (NTP) training paradigm. NTP optimizes single-token predictions and ignores semantic coherence across multiple tokens following a prefix. Consequently, fast weight models, which dynamically update their parameters to store contextual information, learn suboptimal representations that fail to capture long-range dependencies. We introduce REFINE (Reinforced Fast weIghts with Next sEquence prediction), a reinforcement learning framework that trains fast weight models under the next-sequence prediction (NSP) objective. REFINE selects informative token positions based on prediction entropy, generates multi-token rollouts, assigns self-supervised sequence-level rewards, and optimizes the model with group relative policy optimization (GRPO). REFINE is applicable throughout the training lifecycle of pre-trained language models: mid-training, post-training, and test-time training. Our experiments on LaCT-760M and DeltaNet-1.3B demonstrate that REFINE consistently outperforms supervised fine-tuning with NTP across needle-in-a-haystack retrieval, long-context question answering, and diverse tasks in LongBench. REFINE provides an effective and versatile framework for improving long-context modeling in fast weight architectures.
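
The position-selection step can be sketched as follows, assuming per-position next-token distributions are available. Picking the k highest-entropy positions is an assumption about the described mechanism; the helper names are hypothetical.

```python
import math

def token_entropy(probs):
    # Shannon entropy (in nats) of one next-token distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_rollout_positions(dists, k):
    """Pick the k positions with the highest predictive entropy as
    starting points for multi-token rollouts (illustrative helper)."""
    ranked = sorted(range(len(dists)),
                    key=lambda i: token_entropy(dists[i]), reverse=True)
    return sorted(ranked[:k])
```

High-entropy positions are where the next tokens are least determined by the prefix, so sequence-level rewards on rollouts from those positions carry the most signal.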

  11. SAM 3D Body: Robust Full-Body Human Mesh Recovery

    We introduce SAM 3D Body (3DB), a promptable model for single-image full-body 3D human mesh recovery (HMR) that demonstrates state-of-the-art performance, with strong generalization and consistent accuracy in diverse in-the-wild conditions. 3DB estimates the human pose of the body, feet, and hands. It is the first model to use a new parametric mesh representation, Momentum Human Rig (MHR), which decouples skeletal structure and surface shape. 3DB employs an encoder-decoder architecture and supports auxiliary prompts, including 2D keypoints and masks, enabling user-guided inference similar to the SAM family of models. We derive high-quality annotations from a multi-stage annotation pipeline that uses various combinations of manual keypoint annotation, differentiable optimization, multi-view geometry, and dense keypoint detection. Our data engine efficiently selects and processes data to ensure data diversity, collecting unusual poses and rare imaging conditions. We present a new evaluation dataset organized by pose and appearance categories, enabling nuanced analysis of model behavior. Our experiments demonstrate superior generalization and substantial improvements over prior methods in both qualitative user preference studies and traditional quantitative analysis. Both 3DB and MHR are open-source.

  12. Learning Personalized Agents from Human Feedback

    Modern AI agents are powerful but often fail to align with the idiosyncratic, evolving preferences of individual users. Prior approaches typically rely on static datasets, either training implicit preference models on interaction history or encoding user profiles in external memory. However, these approaches struggle with new users and with preferences that change over time. We introduce Personalized Agents from Human Feedback (PAHF), a framework for continual personalization in which agents learn online from live interaction using explicit per-user memory. PAHF operationalizes a three-step loop: (1) seeking pre-action clarification to resolve ambiguity, (2) grounding actions in preferences retrieved from memory, and (3) integrating post-action feedback to update memory when preferences drift. To evaluate this capability, we develop a four-phase protocol and two benchmarks in embodied manipulation and online shopping. These benchmarks quantify an agent's ability to learn initial preferences from scratch and subsequently adapt to persona shifts. Our theoretical analysis and empirical results show that integrating explicit memory with dual feedback channels is critical: PAHF learns substantially faster and consistently outperforms both no-memory and single-channel baselines, reducing initial personalization error and enabling rapid adaptation to preference shifts.
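
The three-step loop can be sketched as a single interaction, assuming duck-typed agent and memory objects. All interfaces below (is_ambiguous, clarify, retrieve, act, get_feedback, update) are assumptions for illustration, not the paper's API.

```python
def pahf_step(agent, memory, task):
    """One illustrative PAHF-style interaction:
    (1) seek pre-action clarification, (2) ground the action in retrieved
    preferences, (3) integrate post-action feedback into memory.
    Object interfaces are hypothetical."""
    if agent.is_ambiguous(task):
        task = agent.clarify(task)          # (1) resolve ambiguity first
    prefs = memory.retrieve(task)           # explicit per-user memory
    outcome = agent.act(task, prefs)        # (2) preference-grounded action
    feedback = agent.get_feedback(outcome)
    if feedback is not None:
        memory.update(task, feedback)       # (3) handle preference drift
    return outcome
```

The key design point the sketch captures is the dual feedback channel: clarification before acting and corrective feedback after, both writing into the same explicit memory.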

  13. MMA: Multimodal Memory Agent

    Long-horizon multimodal agents depend on external memory; however, similarity-based retrieval often surfaces stale, low-credibility, or conflicting items, which can trigger overconfident errors. We propose Multimodal Memory Agent (MMA), which assigns each retrieved memory item a dynamic reliability score by combining source credibility, temporal decay, and conflict-aware network consensus, and uses this signal to reweight evidence and abstain when support is insufficient. We also introduce MMA-Bench, a programmatically generated benchmark for belief dynamics with controlled speaker reliability and structured text-vision contradictions. Using this framework, we uncover the "Visual Placebo Effect", revealing how RAG-based agents inherit latent visual biases from foundation models. On FEVER, MMA matches baseline accuracy while reducing variance by 35.2% and improving selective utility; on LoCoMo, a safety-oriented configuration improves actionable accuracy and reduces wrong answers; on MMA-Bench, MMA reaches 41.18% Type-B accuracy in Vision mode, while the baseline collapses to 0.0% under the same protocol. Code: https://github.com/AIGeeksGroup/MMA.
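
A minimal sketch of the reliability idea, under assumptions: each retrieved item's score multiplies source credibility, exponential temporal decay, and a smoothed consensus term, and the agent abstains when no item clears a threshold. The multiplicative form, parameter names, and threshold are all illustrative, not MMA's actual scoring function.

```python
def item_reliability(credibility, age_s, agree, conflict, half_life_s=86400.0):
    """Toy reliability score for one retrieved memory item:
    source credibility * temporal decay * network consensus.
    Purely illustrative assumptions throughout."""
    decay = 0.5 ** (age_s / half_life_s)               # halve per half-life
    consensus = (agree + 1) / (agree + conflict + 2)   # Laplace-smoothed
    return credibility * decay * consensus

def answer_or_abstain(scored_items, threshold=0.3):
    # scored_items: list of (reliability, answer) pairs.
    # Abstain when no retrieved evidence is reliable enough.
    best = max(scored_items, default=(0.0, None))
    return best[1] if best[0] >= threshold else "ABSTAIN"
```

For example, a fresh item from a credible source with 9 agreeing and 0 conflicting neighbors scores about 0.82 and is answered, while an empty or low-scoring retrieval set yields abstention.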

  14. Learning Situated Awareness in the Real World

    A core aspect of human perception is situated awareness, the ability to relate ourselves to the surrounding physical environment and reason over possible actions in context. However, most existing benchmarks for multimodal foundation models (MFMs) emphasize environment-centric spatial relations (relations among objects in a scene), while largely overlooking observer-centric relationships that require reasoning relative to the agent's viewpoint, pose, and motion. To bridge this gap, we introduce SAW-Bench (Situated Awareness in the Real World), a novel benchmark for evaluating egocentric situated awareness using real-world videos. SAW-Bench comprises 786 self-recorded videos captured with Ray-Ban Meta (Gen 2) smart glasses spanning diverse indoor and outdoor environments, and over 2,071 human-annotated question-answer pairs. It probes a model's observer-centric understanding with six different awareness tasks. Our comprehensive evaluation reveals a human-model performance gap of 37.66%, even with the best-performing MFM, Gemini 3 Flash. Beyond this gap, our in-depth analysis uncovers several notable findings; for example, while models can exploit partial geometric cues in egocentric videos, they often fail to infer a coherent camera geometry, leading to systematic spatial reasoning errors. We position SAW-Bench as a benchmark for situated spatial intelligence, moving beyond passive observation to understanding physically grounded, observer-centric dynamics.

  15. Optimizing Few-Step Generation with Adaptive Matching Distillation

    Distribution Matching Distillation (DMD) is a powerful acceleration paradigm, yet its stability is often compromised in Forbidden Zones, regions where the real teacher provides unreliable guidance while the fake teacher exerts insufficient repulsive force. In this work, we propose a unified optimization framework that reinterprets prior art as implicit strategies to avoid these corrupted regions. Based on this insight, we introduce Adaptive Matching Distillation (AMD), a self-correcting mechanism that utilizes reward proxies to explicitly detect and escape Forbidden Zones. AMD dynamically prioritizes corrective gradients via structural signal decomposition and introduces Repulsive Landscape Sharpening to enforce steep energy barriers against failure mode collapse. Extensive experiments across image and video generation tasks (e.g., SDXL, Wan2.1) and rigorous benchmarks (e.g., VBench, GenEval) demonstrate that AMD significantly enhances sample fidelity and training robustness. For instance, AMD improves the HPSv2 score on SDXL from 30.64 to 31.25, outperforming state-of-the-art baselines. These findings validate that explicitly rectifying optimization trajectories within Forbidden Zones is essential for pushing the performance ceiling of few-step generative models.

Solidot(9)

  1. 99% of adults over 40 have at least one rotator cuff abnormality

    According to a study published in JAMA Internal Medicine, MRI imaging shows that 99% of adults over 40 have at least one rotator cuff abnormality. The researchers argue that when such a high proportion of people share the same condition, it should be regarded not as an abnormality but as a normal finding requiring no treatment. The rotator cuff is the group of muscles and tendons that stabilizes the shoulder and allows the joint's wide range of motion. 602 participants aged 41-76 completed the study; 82% reported no shoulder symptoms and 18% reported symptoms. MRI imaging showed that 595 of them (99%) had at least one rotator cuff abnormality. The most common finding was partial tears (62%), followed by tendinopathy (25%) and full-thickness tears (11%), with similar rates in men and women. No full-thickness tears were found in participants under 45, and their rate was highest in the 70-76 age group.

  2. Google's new Pixel 10a is basically last year's Pixel 9a

    Google has launched the Pixel 10a, a new mid-range phone whose specs and $500 price are essentially the same as last year's Pixel 9a. The most notable change is that the camera no longer protrudes, so the phone can slide around on a table. The Pixel 10a has the same display resolution as the Pixel 9a, with slightly improved cover glass and peak brightness. The processor is still the Tensor G4 rather than the newer Tensor G5 SoC used in the rest of the Pixel 10 lineup, and the camera hardware, RAM, and storage all match the Pixel 9a. Battery life is slightly better, but Pixelsnap Qi2 wireless charging and the premium Gemini AI features are absent.

  3. Gabon blocks all social media

    Gabon's media regulator, the High Authority for Communication (HAC), has announced that all social media will be blocked until further notice, citing the spread of "false information," "cyberbullying," and "unauthorized disclosure of personal data," and saying that online content has fueled conflict and deepened divisions. General Brice Oligui Nguema, who staged a military coup in 2023 and won last year's presidential election, faces growing social unrest, with teachers and other civil servants striking over pay and working conditions. Monitoring by Netblocks shows that the major social media platforms WhatsApp, Facebook, and TikTok have all been blocked.

  4. 14-year-old's origami structure supports nearly 10,000 times its own weight

    A Miura-ori origami structure designed by 14-year-old Miles Wu can support nearly 10,000 times its own weight, earning him the $25,000 top prize in the Thermo Fisher Scientific Junior Innovators Challenge. He is a ninth grader at Hunter College High School in New York. Wu has been fascinated by Japanese origami since taking it up as a hobby 6 years ago, and in 2024 he began exploring uses for origami beyond art. He was particularly drawn to the Miura fold, invented by the Japanese scholar Koryo Miura: pulling on two diagonal corners instantly unfolds or collapses the piece, and the technique saves space, distributes stress, and avoids repeated folding. Combining origami with engineering, it can be applied to deployable solar panels, map folding, and architectural structures, and has already been used to unfold solar panels on spacecraft and satellites. While he was studying the Miura fold, Hurricane Helene made landfall in Florida and wildfires raged in Southern California, which led him to consider using it to build sturdy, cheap, and easy-to-assemble emergency tents, since existing emergency tents struggle to achieve all three. He tested 54 Miura-fold structures; the strongest could support more than 10,000 times its own weight.

  5. The UK has closed 14,000 pubs in 13 years

    According to an analysis by data analyst Lauren Leek, the UK has lost more than 14,000 pubs since 2009, with registered pubs falling from 54,000 in 2009 to under 40,000 in 2022. London saw the smallest decline. Leek's analysis also found that pubs need to cluster together to avoid closure: a pub too far from its nearest neighbor is far more likely to fail. The median distance between surviving pubs and their nearest neighbor is about 280 meters, versus about 640 meters for closed pubs. A large share of British pubs is controlled by private equity: the UK's largest pub company, Stonegate, controls 1 in 11 of the country's pubs and belongs to the private equity firm TDR Capital. It grew by acquiring other pubs with massive debt, but skill at acquisitions does not translate into skill at operations, and its leveraged buyouts have left it more than $4 billion in debt. Between a quarter and a third of UK pubs are now controlled by private equity and overseas companies.

  6. Older adults are the main audience for medical misinformation

    Researchers at the University of Utah tracked the web browsing of more than a thousand US adults over four weeks and found that the main audience for medical misinformation is older adults, especially politically right-leaning older adults. During the study, participants visited about 9 million URLs, including 500,000 YouTube videos. 1,055 domains fell into the health category, of which 78 were judged to spread false health information. Only 13% of participants visited such sites, with most of the traffic concentrated among older adults. The researchers note that their data cannot determine whether participants reached these sites via Google searches or Facebook recommendations.

  7. Seagate and Western Digital confirm their 2026 hard drive capacity is sold out

    Seagate and Western Digital, two of the three major hard drive makers, have confirmed that their 2026 hard drive production capacity is entirely or nearly sold out, and the situation at the third, Toshiba, is likely similar. Western Digital CEO Irving Tan said the company has supply agreements with two of its five largest customers running through 2027 and with another through 2028. Seagate CEO William Mosley said the company will begin taking orders for the first half of 2027 in the coming months. The big customers of both companies are data center operators, including Amazon AWS, Google, Microsoft Azure, Meta, and OpenAI. Server drives account for 87% of Seagate's total drive sales, up from 83% a year ago. Seagate says it has no plans to expand capacity for now.

  8. Soaring memory prices drive up sales of used laptops

    Large-scale purchasing by AI companies has left memory and hard drives in short supply, with prices multiplying within months, and the shortage of memory and other key components has pushed up sales of used and refurbished laptops. According to data from Context, sales of refurbished laptops in the five largest European markets (Italy, the UK, Germany, Spain, and France) rose 7% in the fourth quarter of last year. Forty percent of sales came from budget-constrained buyers purchasing laptops in the $235-355 range. Sales in the $355-475 range are also growing, now accounting for 23% of all used laptop sales, up from 15% a year ago, which suggests that some buyers are willing to pay more for better specs.

  9. DNA mutations in the children of Chernobyl cleanup workers

    Researchers sequenced the genomes of 130 people whose fathers took part in the cleanup of the Chernobyl nuclear accident. By comparison with a control group, they found the first evidence of a "transgenerational effect" of fathers' prolonged exposure to low doses of ionizing radiation. The study was published in Scientific Reports. Rather than looking for new individual mutations, the researchers looked for "clustered de novo mutations" (cDNMs): two or more mutations at nearby positions that are absent in the parents but appear for the first time in the offspring. These mutations arise from radiation-induced breaks in the parents' DNA. The researchers found that paternal radiation exposure significantly increased the number of cDNMs in offspring, and that the number correlated with the radiation dose received. The increase did not raise the offspring's disease risk, possibly because most cDNMs lie in non-coding regions of DNA.