WEEK · 2026-W08

Weekly Digest — 2026-W08

127 unique stories (2026-02-16 to 2026-02-22), aggregated across 8 sources.

Hacker News (42)

  1. Study: Self-generated Agent Skills are useless (arxiv.org)
  2. 14-year-old Miles Wu folded origami pattern that holds 10k times its own weight (www.smithsonianmag.com)
  3. Show HN: Jemini – Gemini for the Epstein Files (jmail.world)
  4. Use protocols, not services (notnotp.com)
  5. Privilege is bad grammar (tadaima.bearblog.dev)
  6. I guess I kinda get why people hate AI (anthony.noided.media)
  7. Thank HN: You helped save 33k lives
  8. Stephen Colbert says CBS forbid interview of Democrat because of FCC threat (arstechnica.com)
  9. Show HN: AsteroidOS 2.0 – Nobody asked, we shipped anyway (asteroidos.org)
  10. Tesla 'Robotaxi' adds 5 more crashes in Austin in a month – 4x worse than humans (electrek.co)
  11. Discord Rival Gets Overwhelmed by Exodus of Players Fleeing Age-Verification (kotaku.com)
  12. Claude Sonnet 4.6 (www.anthropic.com)

GitHub Trending (27)

  1. alibaba / zvec

    A lightweight, lightning-fast, in-process vector database

  2. nautechsystems / nautilus_trader

    A high-performance algorithmic trading platform and event-driven backtester

  3. rowboatlabs / rowboat

    Open-source AI coworker, with memory

  4. steipete / gogcli

    Google Suite CLI: Gmail, GCal, GDrive, GContacts.

  5. openclaw / openclaw

    Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

  6. SynkraAI / aios-core

    Synkra AIOS: AI-Orchestrated System for Full Stack Development - Core Framework v4.0

  7. p-e-w / heretic

    Fully automatic censorship removal for language models

  8. seerr-team / seerr

    Open-source media request and discovery manager for Jellyfin, Plex, and Emby.

  9. obra / superpowers

    An agentic skills framework & software development methodology that works.

  10. OpenCTI-Platform / opencti

    Open Cyber Threat Intelligence Platform

  11. QwenLM / qwen-code

    An open-source AI agent that lives in your terminal.

  12. NirDiamant / RAG_Techniques

    This repository showcases various advanced techniques for Retrieval-Augmented Generation (RAG) systems. RAG systems combine information retrieval with generative models to provide accurate and contextually rich responses.

Hugging Face (31)

  1. Less is Enough: Synthesizing Diverse Data in Feature Space of LLMs

    The diversity of post-training data is critical for effective downstream performance in large language models (LLMs). Many existing approaches to constructing post-training data quantify diversity using text-based metrics that capture linguistic variation, but such metrics provide only weak signals for the task-relevant features that determine downstream performance. In this work, we introduce Feature Activation Coverage (FAC) which measures data diversity in an interpretable feature space. Building upon this metric, we further propose a diversity-driven data synthesis framework, named FAC Synthesis, that first uses a sparse autoencoder to identify missing features from a seed dataset, and then generates synthetic samples that explicitly reflect these features. Experiments show that our approach consistently improves both data diversity and downstream performance on various tasks, including instruction following, toxicity detection, reward modeling, and behavior steering. Interestingly, we identify a shared, interpretable feature space across model families (i.e., LLaMA, Mistral, and Qwen), enabling cross-model knowledge transfer. Our work provides a solid and practical methodology for exploring data-centric optimization of LLMs.
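The coverage idea behind FAC can be illustrated with a toy sketch. This assumes a matrix of per-sample feature activations (in the paper these would come from a sparse autoencoder; here the matrix and all names are hypothetical stand-ins):

```python
import numpy as np

def feature_activation_coverage(activations: np.ndarray, threshold: float = 0.0):
    """Toy Feature Activation Coverage: the fraction of features that at
    least one sample in the dataset activates above the threshold."""
    active = (activations > threshold).any(axis=0)   # per-feature: ever activated?
    coverage = float(active.mean())
    missing = np.flatnonzero(~active)                # features no sample activates
    return coverage, missing

# 4 samples x 6 hypothetical SAE features
acts = np.array([
    [0.9, 0.0, 0.0, 0.2, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.0, 0.0, 0.0],
    [0.3, 0.0, 0.0, 0.7, 0.0, 0.0],
    [0.0, 0.1, 0.0, 0.0, 0.0, 0.0],
])
cov, missing = feature_activation_coverage(acts)
print(cov)      # 0.5 -> half the feature space is covered
print(missing)  # [2 4 5] -> candidate targets for diversity-driven synthesis
```

In the paper's framework, the "missing" features would then steer synthetic-sample generation; here they are simply reported.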

  2. SQuTR: A Robustness Benchmark for Spoken Query to Text Retrieval under Acoustic Noise

    Spoken query retrieval is an important interaction mode in modern information retrieval. However, existing evaluation datasets are often limited to simple queries under constrained noise conditions, making them inadequate for assessing the robustness of spoken query retrieval systems under complex acoustic perturbations. To address this limitation, we present SQuTR, a robustness benchmark for spoken query retrieval that includes a large-scale dataset and a unified evaluation protocol. SQuTR aggregates 37,317 unique queries from six commonly used English and Chinese text retrieval datasets, spanning multiple domains and diverse query types. We synthesize speech using voice profiles from 200 real speakers and mix 17 categories of real-world environmental noise under controlled SNR levels, enabling reproducible robustness evaluation from quiet to highly noisy conditions. Under the unified protocol, we conduct large-scale evaluations on representative cascaded and end-to-end retrieval systems. Experimental results show that retrieval performance decreases as noise increases, with substantially different drops across systems. Even large-scale retrieval models struggle under extreme noise, indicating that robustness remains a critical bottleneck. Overall, SQuTR provides a reproducible testbed for benchmarking and diagnostic analysis, and facilitates future research on robustness in spoken query to text retrieval.

  3. MedXIAOHE: A Comprehensive Recipe for Building Medical MLLMs

    We present MedXIAOHE, a medical vision-language foundation model designed to advance general-purpose medical understanding and reasoning in real-world clinical applications. MedXIAOHE achieves state-of-the-art performance across diverse medical benchmarks and surpasses leading closed-source multimodal systems on multiple capabilities. To achieve this, we propose an entity-aware continual pretraining framework that organizes heterogeneous medical corpora to broaden knowledge coverage and reduce long-tail gaps (e.g., rare diseases). For medical expert-level reasoning and interaction, MedXIAOHE incorporates diverse medical reasoning patterns via reinforcement learning and tool-augmented agentic training, enabling multi-step diagnostic reasoning with verifiable decision traces. To improve reliability in real-world use, MedXIAOHE integrates user-preference rubrics, evidence-grounded reasoning, and low-hallucination long-form report generation, with improved adherence to medical instructions. We release this report to document our practical design choices, scaling insights, and evaluation framework, hoping to inspire further research.

  4. Zooming without Zooming: Region-to-Image Distillation for Fine-Grained Multimodal Perception

Multimodal Large Language Models (MLLMs) excel at broad visual understanding but still struggle with fine-grained perception, where decisive evidence is small and easily overwhelmed by global context. Recent "Thinking-with-Images" methods alleviate this by iteratively zooming in and out of regions of interest during inference, but incur high latency due to repeated tool calls and visual re-encoding. To address this, we propose Region-to-Image Distillation, which transforms zooming from an inference-time tool into a training-time primitive, thereby internalizing the benefits of agentic zooming into a single forward pass of an MLLM. In particular, we first zoom in to micro-cropped regions to let strong teacher models generate high-quality VQA data, and then distill this region-grounded supervision back to the full image. After training on such data, the smaller student model improves "single-glance" fine-grained perception without tool use. To rigorously evaluate this capability, we further present ZoomBench, a hybrid-annotated benchmark of 845 VQA items spanning six fine-grained perceptual dimensions, together with a dual-view protocol that quantifies the global-regional "zooming gap". Experiments show that our models achieve leading performance across multiple fine-grained perception benchmarks, and also improve general multimodal cognition on benchmarks such as visual reasoning and GUI agents. We further discuss when "Thinking-with-Images" is necessary versus when its gains can be distilled into a single forward pass. Our code is available at https://github.com/inclusionAI/Zooming-without-Zooming.

  5. OneVision-Encoder: Codec-Aligned Sparsity as a Foundational Principle for Multimodal Intelligence

Hypothesis. Artificial general intelligence is, at its core, a compression problem. Effective compression demands resonance: deep learning scales best when its architecture aligns with the fundamental structure of the data. These are the fundamental principles. Yet, modern vision architectures have strayed from these truths: visual signals are highly redundant, while discriminative information, the surprise, is sparse. Current models process dense pixel grids uniformly, wasting vast compute on static background rather than focusing on the predictive residuals that define motion and meaning. We argue that to solve visual understanding, we must align our architectures with the information-theoretic principles of video, i.e., codecs. Method. OneVision-Encoder encodes video by compressing predictive visual structure into semantic meaning. By adopting Codec Patchification, OV-Encoder abandons uniform computation to focus exclusively on the 3.1%-25% of regions rich in signal entropy. To unify spatial and temporal reasoning under irregular token layouts, OneVision-Encoder employs a shared 3D RoPE and is trained with a large-scale cluster discrimination objective over more than one million semantic concepts, jointly capturing object permanence and motion dynamics. Evidence. The results validate our core hypothesis: efficiency and accuracy are not a trade-off; they are positively correlated. When integrated into an LLM, it consistently outperforms strong vision backbones such as Qwen3-ViT and SigLIP2 across 16 image, video, and document understanding benchmarks, despite using substantially fewer visual tokens and pretraining data. Notably, on video understanding tasks, OV-Encoder achieves an average improvement of 4.1% over Qwen3-ViT. Codec-aligned, patch-level sparsity is a foundational principle, enabling OV-Encoder as a scalable engine for next-generation visual generalists.

  6. CoPE-VideoLM: Codec Primitives For Efficient Video Language Models

    Video Language Models (VideoLMs) empower AI systems to understand temporal dynamics in videos. To fit to the maximum context window constraint, current methods use keyframe sampling which can miss both macro-level events and micro-level details due to the sparse temporal coverage. Furthermore, processing full images and their tokens for each frame incurs substantial computational overhead. To address these limitations, we propose to leverage video codec primitives (specifically motion vectors and residuals) which natively encode video redundancy and sparsity without requiring expensive full-image encoding for most frames. To this end, we introduce lightweight transformer-based encoders that aggregate codec primitives and align their representations with image encoder embeddings through a pre-training strategy that accelerates convergence during end-to-end fine-tuning. Our approach reduces the time-to-first-token by up to 86% and token usage by up to 93% compared to standard VideoLMs. Moreover, by varying the keyframe and codec primitive densities we are able to maintain or exceed performance on 14 diverse video understanding benchmarks spanning general question answering, temporal reasoning, long-form understanding, and spatial scene understanding.

  7. DeepImageSearch: Benchmarking Multimodal Agents for Context-Aware Image Retrieval in Visual Histories

    Existing multimodal retrieval systems excel at semantic matching but implicitly assume that query-image relevance can be measured in isolation. This paradigm overlooks the rich dependencies inherent in realistic visual streams, where information is distributed across temporal sequences rather than confined to single snapshots. To bridge this gap, we introduce DeepImageSearch, a novel agentic paradigm that reformulates image retrieval as an autonomous exploration task. Models must plan and perform multi-step reasoning over raw visual histories to locate targets based on implicit contextual cues. We construct DISBench, a challenging benchmark built on interconnected visual data. To address the scalability challenge of creating context-dependent queries, we propose a human-model collaborative pipeline that employs vision-language models to mine latent spatiotemporal associations, effectively offloading intensive context discovery before human verification. Furthermore, we build a robust baseline using a modular agent framework equipped with fine-grained tools and a dual-memory system for long-horizon navigation. Extensive experiments demonstrate that DISBench poses significant challenges to state-of-the-art models, highlighting the necessity of incorporating agentic reasoning into next-generation retrieval systems.

  8. Experiential Reinforcement Learning

    Reinforcement learning has become the central approach for language models (LMs) to learn from environmental reward or feedback. In practice, the environmental feedback is usually sparse and delayed. Learning from such signals is challenging, as LMs must implicitly infer how observed failures should translate into behavioral changes for future iterations. We introduce Experiential Reinforcement Learning (ERL), a training paradigm that embeds an explicit experience-reflection-consolidation loop into the reinforcement learning process. Given a task, the model generates an initial attempt, receives environmental feedback, and produces a reflection that guides a refined second attempt, whose success is reinforced and internalized into the base policy. This process converts feedback into structured behavioral revision, improving exploration and stabilizing optimization while preserving gains at deployment without additional inference cost. Across sparse-reward control environments and agentic reasoning benchmarks, ERL consistently improves learning efficiency and final performance over strong reinforcement learning baselines, achieving gains of up to +81% in complex multi-step environments and up to +11% in tool-using reasoning tasks. These results suggest that integrating explicit self-reflection into policy training provides a practical mechanism for transforming feedback into durable behavioral improvement.
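The experience-reflection-consolidation loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in (stub environment, stub policy, stub reflection), not the paper's API; the point is only the control flow:

```python
# Toy ERL cycle: attempt -> feedback -> reflection -> refined attempt -> consolidate.

def erl_step(task, attempt_fn, env_feedback, reflect, consolidate):
    """One experience-reflection-consolidation cycle."""
    first = attempt_fn(task, hint=None)        # initial attempt
    fb = env_feedback(task, first)             # sparse environmental feedback
    if fb["success"]:
        consolidate(task, first)               # reinforce the working attempt
        return first
    hint = reflect(task, first, fb)            # turn failure into guidance
    second = attempt_fn(task, hint=hint)       # refined second attempt
    if env_feedback(task, second)["success"]:
        consolidate(task, second)              # internalize into the base policy
    return second

# Stub task: the "policy" only answers correctly once it has a reflection hint.
def attempt(task, hint=None):
    return task["answer"] if hint else task["guess"]

def env(task, out):
    ok = out == task["answer"]
    return {"success": ok, "error": None if ok else "wrong answer"}

def reflect(task, out, fb):
    return f"attempt {out} failed: {fb['error']}"

memory = []                                    # consolidated behavior
def consolidate(task, out):
    memory.append((task["id"], out))

task = {"id": 1, "guess": 3, "answer": 4}
result = erl_step(task, attempt, env, reflect, consolidate)
print(result, memory)  # 4 [(1, 4)]
```

In the actual method, "consolidate" corresponds to reinforcing the successful trajectory into the policy weights, so the gains persist at deployment without extra inference cost.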

  9. REDSearcher: A Scalable and Cost-Efficient Framework for Long-Horizon Search Agents

Large language models are transitioning from general-purpose knowledge engines to real-world problem solvers, yet optimizing them for deep search tasks remains challenging. The central bottleneck lies in the extreme sparsity of high-quality search trajectories and reward signals, arising from the difficulty of scalable long-horizon task construction and the high cost of interaction-heavy rollouts involving external tool calls. To address these challenges, we propose REDSearcher, a unified framework that co-designs complex task synthesis, mid-training, and post-training for scalable search-agent optimization. Specifically, REDSearcher introduces the following improvements: (1) We frame task synthesis as a dual-constrained optimization, where task difficulty is precisely governed by graph topology and evidence dispersion, allowing scalable generation of complex, high-quality tasks. (2) We introduce tool-augmented queries to encourage proactive tool use rather than passive recall. (3) During mid-training, we strengthen core atomic capabilities (knowledge, planning, and function calling), substantially reducing the cost of collecting high-quality trajectories for downstream training. (4) We build a local simulated environment that enables rapid, low-cost algorithmic iteration for reinforcement learning experiments. Across both text-only and multimodal search-agent benchmarks, our approach achieves state-of-the-art performance. To facilitate future research on long-horizon search agents, we will release 10K high-quality complex text search trajectories, 5K multimodal trajectories, and a 1K text RL query set, together with code and model checkpoints.

  10. STATe-of-Thoughts: Structured Action Templates for Tree-of-Thoughts

    Inference-Time-Compute (ITC) methods like Best-of-N and Tree-of-Thoughts are meant to produce output candidates that are both high-quality and diverse, but their use of high-temperature sampling often fails to achieve meaningful output diversity. Moreover, existing ITC methods offer limited control over how to perform reasoning, which in turn limits their explainability. We present STATe-of-Thoughts (STATe), an interpretable ITC method that searches over high-level reasoning patterns. STATe replaces stochastic sampling with discrete and interpretable textual interventions: a controller selects actions encoding high-level reasoning choices, a generator produces reasoning steps conditioned on those choices, and an evaluator scores candidates to guide search. This structured approach yields three main advantages. First, action-guided textual interventions produce greater response diversity than temperature-based sampling. Second, in a case study on argument generation, STATe's explicit action sequences capture interpretable features that are highly predictive of output quality. Third, estimating the association between performance and action choices allows us to identify promising yet unexplored regions of the action space and steer generation directly toward them. Together, these results establish STATe as a practical framework for generating high-quality, diverse, and interpretable text. Our framework is available at https://github.com/zbambergerNLP/state-of-thoughts.
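The controller-generator-evaluator loop described above reduces to a simple expansion step. This is a toy sketch: the action set, generator, and scorer are hypothetical stand-ins (a real system would condition an LM on the chosen action and use a learned evaluator):

```python
# One tree expansion: the controller enumerates discrete actions, the
# generator produces a candidate per action, the evaluator scores them.

ACTIONS = ["give_example", "cite_statistic", "address_counterargument"]

def generator(prompt, action):
    """Produce a reasoning step conditioned on a discrete action (here: a tag)."""
    return f"{prompt} [{action}]"

def evaluator(candidate):
    """Toy scorer standing in for a learned evaluator."""
    return 2.0 if "counterargument" in candidate else 1.0

def state_expand(prompt):
    """Score every action's candidate and keep the best one."""
    scored = []
    for a in ACTIONS:
        c = generator(prompt, a)
        scored.append((evaluator(c), a, c))
    score, action, best = max(scored)
    return action, best

action, best = state_expand("Argue for protected bike lanes.")
print(action)  # address_counterargument
```

Because the branching is over named actions rather than temperature noise, the chosen action sequence itself is an interpretable record of how the output was produced, which is the property the paper exploits for diversity and steering.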

  11. Query as Anchor: Scenario-Adaptive User Representation via Large Language Model

    Industrial-scale user representation learning requires balancing robust universality with acute task-sensitivity. However, existing paradigms primarily yield static, task-agnostic embeddings that struggle to reconcile the divergent requirements of downstream scenarios within unified vector spaces. Furthermore, heterogeneous multi-source data introduces inherent noise and modality conflicts, degrading representation. We propose Query-as-Anchor, a framework shifting user modeling from static encoding to dynamic, query-aware synthesis. To empower Large Language Models (LLMs) with deep user understanding, we first construct UserU, an industrial-scale pre-training dataset that aligns multi-modal behavioral sequences with user understanding semantics, and our Q-Anchor Embedding architecture integrates hierarchical coarse-to-fine encoders into dual-tower LLMs via joint contrastive-autoregressive optimization for query-aware user representation. To bridge the gap between general pre-training and specialized business logic, we further introduce Cluster-based Soft Prompt Tuning to enforce discriminative latent structures, effectively aligning model attention with scenario-specific modalities. For deployment, anchoring queries at sequence termini enables KV-cache-accelerated inference with negligible incremental latency. Evaluations on 10 Alipay industrial benchmarks show consistent SOTA performance, strong scalability, and efficient deployment. Large-scale online A/B testing in Alipay's production system across two real-world scenarios further validates its practical effectiveness. Our code is prepared for public release and will be available at: https://github.com/JhCircle/Q-Anchor.

  12. Data Darwinism Part I: Unlocking the Value of Scientific Data for Pre-training

    Data quality determines foundation model performance, yet systematic processing frameworks are lacking. We introduce Data Darwinism, a ten-level taxonomy (L0-L9) that conceptualizes data-model co-evolution: advanced models produce superior data for next-generation systems. We validate this on scientific literature by constructing Darwin-Science, a 900B-token corpus (L0-L5). We identify a learnability gap in raw scientific text, which we bridge via L4 (Generative Refinement) and L5 (Cognitive Completion) using frontier LLMs to explicate reasoning and terminology. To ensure rigorous attribution, we pre-trained daVinci-origin-3B/7B models from scratch, excluding scientific content to create contamination-free baselines. After 600B tokens of continued pre-training, Darwin-Science outperforms baselines by +2.12 (3B) and +2.95 (7B) points across 20+ benchmarks, rising to +5.60 and +8.40 points on domain-aligned tasks. Systematic progression to L5 yields a +1.36 total gain, confirming that higher-level processing unlocks latent data value. We release the Darwin-Science corpus and daVinci-origin models to enable principled, co-evolutionary development.

Solidot (27)

  1. Babylon 5 uploaded to YouTube for free viewing

    Warner Bros. Discovery is uploading the famous science-fiction series Babylon 5 to YouTube at a rate of one episode per week, free for anyone to watch. The first episode of season one, "The Gathering," went up on January 22 and has been viewed 250,000 times so far; the second episode, "Midnight on the Firing Line," and the third, "Soul Hunter," have also been released. The weekly cadence mirrors the show's original broadcast schedule, letting viewers experience the story at the same pace. Babylon 5 premiered on February 22, 1993 and ran for five seasons totaling 110 episodes. The story is set in 2257-2262: the Earth Alliance, formed by Earth's nations, Mars, and the Proxima Centauri colonies, has made contact with other alien civilizations and acquired hyperspace technology for faster-than-light travel. Ten years before the story begins, Earth was nearly annihilated in an interstellar war with the Minbari, who abruptly surrendered on the eve of victory. To keep the tragedy from recurring, the two sides established channels for peaceful relations, and humanity built the Babylon 5 space station as a hub for diplomacy and trade. The station becomes the focal point of political intrigue, racial conflict, and a great war, while Earth cuts ties with its allies and slides toward fascism.

  2. Ars Technica AI reporter apologizes for AI-generated content

    The well-known tech outlet Ars Technica was found last week to have used AI-generated content as a source in an AI news story. Ars co-founder and editor-in-chief Ken Fisher issued a public apology on Sunday, saying they had reviewed a range of recently published articles and found no others containing AI-generated content; for now it appears to be an isolated incident. Benj Edwards, the story's co-author and Ars's senior AI reporter, explained that he had tried to use an experimental AI tool built on Claude Code to extract structured quotes from source material for his outline, but the AI refused to process it, he suspects because the article described a harassment incident (an AI harassing a human). He then pasted the text into ChatGPT and failed to notice that ChatGPT had produced paraphrases of the article author's statements rather than the original words, and he did not check the quotes against the source when citing them. An AI reporter tripped up by an AI hallucination: the irony is hard to miss.

  3. OpenClaw founder joins OpenAI

    Peter Steinberger, founder of the open-source OpenClaw project, has announced he is joining OpenAI; OpenClaw will be managed by a foundation. OpenClaw is an open-source autonomous AI virtual-assistant project, first released on GitHub in late 2025 under the name Clawdbot, later renamed Moltbot, and finally given its current name. In early 2026 the project drew attention for its ability to autonomously handle complex tasks across apps and online services according to user instructions. OpenClaw can be deployed locally on macOS, Windows, and other systems; it can call other large AI models and APIs, and it receives text instructions from users via messaging platforms such as WhatsApp, Telegram, Signal, and Discord to schedule appointments, send messages, organize files, write code, and more.

  4. Vim 9.2 released

    The Vim text editor project released v9.2 on Valentine's Day. Major changes include: experimental Wayland support; support for the XDG Base Directory Specification, which stores configuration files, cached data, and user data in separate directories; modern defaults for HiDPI displays; new code-completion features; improvements to diff mode; a new vertical tab panel; native dark-mode support in the Windows build; and more.

  5. Why global warming is accelerating

    An analysis of global mean surface temperatures from 1880-2025 shows that warming has accelerated over the past 30 years, reaching nearly 0.27°C per decade in the past 10 years. One explanation for the acceleration is the decline in aerosol pollution: aerosols reflect sunlight, producing a cooling effect that offsets part of the warming from greenhouse gases. Over the past two decades many countries have cracked down on aerosol pollution, reducing that cooling effect. The researchers argue, however, that the record temperatures of recent years cannot be fully explained by aerosols and natural variability. They found that Earth's low-cloud cover has shrunk; low clouds reflect sunlight, and their declining extent has driven the accelerated warming. The decline in low clouds is partly linked to aerosols, but it may also be a feedback loop caused by rising temperatures, since warmer air makes low clouds harder to form. If the current record heat is mainly due to aerosol changes, the accelerated warming should stop once aerosol pollutants fall to zero, and Earth would return to its earlier, slower rate of warming. But if a cloud feedback loop is responsible, the acceleration is likely to continue, bringing more severe heat waves, storms, and droughts.

  6. Telecom carriers blocked Telnet traffic ahead of a critical vulnerability disclosure

    The critical Telnet vulnerability CVE-2026-24061, disclosed on January 20, resides in GNU InetUtils telnetd; it has existed for ten years, carries a CVSS score of 9.8/10, and makes it very easy for attackers to gain root privileges. Yet a week before the disclosure, global Telnet traffic fell off a cliff. Telecom carriers had evidently received advance warning and acted early to prevent exploitation. The data shows that on January 14, Telnet session counts dropped 65% within one hour and 83% within two hours. Average daily sessions fell from 914,000 on December 1 to roughly 373,000 on January 14, a 59% decline. One or more North American Tier 1 transit providers filtered port 23, the Telnet protocol's default port. At 18 carriers, including BT, Cox Communications, and Vultr, Telnet session counts dropped from hundreds of thousands to zero on January 15.
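The 59% figure can be reproduced from the session counts quoted in the item:

```python
# Sanity-check the reported decline: daily Telnet sessions fell from
# ~914,000 (Dec 1) to ~373,000 (Jan 14).
before, after = 914_000, 373_000
drop_pct = (before - after) / before * 100
print(f"{drop_pct:.0f}%")  # 59%
```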

  7. Older adults are the main audience for medical misinformation

    Researchers at the University of Utah tracked the web browsing of more than a thousand US adults over four weeks and found that the main audience for medical misinformation is older adults, particularly those with right-leaning politics. During the study, participants visited about 9 million URLs, including 500,000 YouTube videos. 1,055 domains fell into the health category, of which 78 were judged to spread false health information. Only 13% of participants visited such sites, and most of the visits were concentrated among older participants. The researchers noted that their data could not determine whether participants reached these sites via Google searches or Facebook recommendations.

  8. Seagate and Western Digital confirm their 2026 hard drive capacity is sold out

    Two of the three major hard drive makers, Seagate and Western Digital, have confirmed that their 2026 drive production capacity is entirely or almost entirely sold out; the third, Toshiba, is likely in a similar position. Western Digital CEO Irving Tan said the company has supply agreements with two of its five largest customers running through 2027, and with another through 2028. Seagate CEO William Mosley said the company will begin taking orders for the first half of 2027 in the coming months. The largest customers of both companies are data center operators, including Amazon AWS, Google, Microsoft Azure, Meta, and OpenAI. Server drives account for 87% of Seagate's total drive sales, up from 83% a year ago. Seagate said it has no plans to expand capacity for now.

  9. Soaring memory prices drive up sales of used laptops

    Memory and hard drives are in short supply because of large-scale purchasing by AI companies, with prices multiplying several-fold within months. The shortage of memory and other key components has boosted sales of refurbished second-hand laptops. According to data from Context, refurbished laptop sales in the five largest European markets, Italy, the UK, Germany, Spain, and France, rose 7% in the fourth quarter of last year. Forty percent of those sales came from budget-constrained customers buying laptops priced between $235 and $355. Sales in the $355-475 range also expanded, accounting for 23% of all second-hand laptop sales, up from 15% a year earlier, suggesting some customers are willing to pay more for better specifications.

  10. DNA mutations in the children of Chernobyl cleanup workers

    Researchers sequenced the genomes of 130 people whose fathers took part in the cleanup after the Chernobyl nuclear accident. By comparing them with a control group, the researchers found the first evidence of a "transgenerational effect" of fathers' prolonged exposure to low-dose ionizing radiation. The study was published in Scientific Reports. Rather than searching for new genetic mutations in general, the researchers looked for clustered de novo mutations (cDNMs): two or more mutations at nearby positions that are absent in the parents but appear for the first time in their offspring. These mutations arise when radiation exposure breaks the parents' DNA. The researchers found that paternal radiation exposure significantly increased the number of cDNMs in offspring, and that the cDNM count correlated with the radiation dose received. The increase did not raise the offspring's disease risk, possibly because most cDNMs lie in non-coding regions of DNA.

  11. 99% of adults over 40 have at least one rotator cuff abnormality

    According to a study published in JAMA Internal Medicine, MRI imaging shows that 99% of adults over 40 have at least one rotator cuff abnormality. The researchers argue that when such a high proportion of people share the same condition, it should be regarded not as an abnormality but as a normal finding requiring no treatment. The rotator cuff is the group of muscles and tendons that stabilizes the shoulder and allows the joint's wide range of motion. 602 participants aged 41-76 completed the study; 82% reported no shoulder symptoms and 18% reported symptoms. MRI imaging showed that 595 participants (99%) had at least one rotator cuff abnormality. The most common finding was partial tears (62%), followed by tendinopathy (25%) and full-thickness tears (11%), with similar rates in men and women. No full-thickness tears were found in participants under 45, and the rate was highest among those aged 70-76.

  12. Google's new Pixel 10a is basically last year's Pixel 9a

    Google has launched the Pixel 10a, a new mid-range phone whose specifications and $500 price are essentially the same as last year's Pixel 9a. The most noticeable change is that the camera no longer protrudes, so the phone can slide around on a table. The Pixel 10a's display resolution matches the Pixel 9a's, with slight improvements to the cover glass and peak brightness. The processor is still the Tensor G4 rather than the newer Tensor G5 SoC used in the rest of the Pixel 10 lineup; the camera hardware, memory, and storage are identical to the Pixel 9a; battery life is slightly better; and neither Pixelsnap Qi2 wireless charging nor the premium Gemini AI features are included.