Weekly Digest — 2026-W04
134 unique stories (2026-01-19 → 2026-01-25), aggregated across 8 sources.
Hacker News(42)
- Nearly a third of social media research has undisclosed ties to industry (www.science.org)
- Letter from a Birmingham Jail (1963) (www.africa.upenn.edu)
- What came first: the CNAME or the A record? (blog.cloudflare.com)
- Apple testing new App Store design that blurs the line between ads and results (9to5mac.com)
- American importers and consumers bear the cost of 2025 tariffs: analysis (www.kielinstitut.de)
- GLM-4.7-Flash (huggingface.co)
- California is free of drought for the first time in 25 years (www.latimes.com)
- A 26,000-year astronomical monument hidden in plain sight (2019) (longnow.org)
- Meta's legal team abandoned its ethical duties (www.afterbabel.com)
- The Unix Pipe Card Game (punkx.org)
- Nvidia Stock Crash Prediction (entropicthoughts.com)
- De-dollarization: Is the US dollar losing its dominance? (2025) (www.jpmorgan.com)
GitHub Trending(25)
- OpenBMB / VoxCPM
VoxCPM: Tokenizer-Free TTS for Context-Aware Speech Generation and True-to-Life Voice Cloning
- google / langextract
A Python library for extracting structured information from unstructured text using LLMs with precise source grounding and interactive visualization.
- iOfficeAI / AionUi
Free, local, open-source Cowork for Gemini CLI, Claude Code, Codex, Opencode, Qwen Code, Goose Cli, Auggie, and more | 🌟 Star if you like it!
- czlonkowski / n8n-mcp
An MCP server for Claude Desktop / Claude Code / Windsurf / Cursor that builds n8n workflows for you
- nautechsystems / nautilus_trader
A high-performance algorithmic trading platform and event-driven backtester
- ahujasid / blender-mcp
- microsoft / agent-lightning
The absolute trainer to light up AI agents.
- AlexxIT / go2rtc
Ultimate camera streaming application with support for RTSP, RTMP, HTTP-FLV, WebRTC, MSE, HLS, MP4, MJPEG, HomeKit, FFmpeg, etc.
- lukasz-madon / awesome-remote-job
A curated list of awesome remote jobs and resources. Inspired by https://github.com/vinta/awesome-python
- tobi / try
fresh directories for every vibe
- tambo-ai / tambo
Generative UI SDK for React
- EveryInc / compound-engineering-plugin
Official Claude Code compound engineering plugin
Hugging Face(31)
- Your Group-Relative Advantage Is Biased
Reinforcement Learning from Verifier Rewards (RLVR) has emerged as a widely used approach for post-training large language models on reasoning tasks, with group-based methods such as GRPO and its variants gaining broad adoption. These methods rely on group-relative advantage estimation to avoid learned critics, yet its theoretical properties remain poorly understood. In this work, we uncover a fundamental issue of group-based RL: the group-relative advantage estimator is inherently biased relative to the true (expected) advantage. We provide the first theoretical analysis showing that it systematically underestimates advantages for hard prompts and overestimates them for easy prompts, leading to imbalanced exploration and exploitation. To address this issue, we propose History-Aware Adaptive Difficulty Weighting (HA-DW), an adaptive reweighting scheme that adjusts advantage estimates based on an evolving difficulty anchor and training dynamics. Both theoretical analysis and experiments on five mathematical reasoning benchmarks demonstrate that HA-DW consistently improves performance when integrated into GRPO and its variants. Our results suggest that correcting biased advantage estimation is critical for robust and efficient RLVR training.
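To make the object of study concrete, here is a minimal sketch of the group-relative advantage estimator used by GRPO-style methods, the quantity the paper argues is biased relative to the true expected advantage. The normalization details vary across GRPO variants; this is only an illustration, not the paper's HA-DW correction.

```python
import numpy as np

def group_relative_advantage(rewards, eps=1e-8):
    """GRPO-style advantage: standardize each sampled response's reward
    against the mean (and std) of its own group of rollouts for the same prompt."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# One prompt, six responses scored by a binary verifier.
# For a hard prompt almost all rewards are 0, so the lone success gets a large
# positive advantage while the failures get only mildly negative ones; the group
# mean is a noisy stand-in for the true success probability, which is the source
# of the bias the paper analyzes.
print(group_relative_advantage([0, 0, 0, 0, 0, 1]))
```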
- The Poisoned Apple Effect: Strategic Manipulation of Mediated Markets via Technology Expansion of AI Agents
The integration of AI agents into economic markets fundamentally alters the landscape of strategic interaction. We investigate the economic implications of expanding the set of available technologies in three canonical game-theoretic settings: bargaining (resource division), negotiation (asymmetric information trade), and persuasion (strategic information transmission). We find that simply increasing the choice of AI delegates can drastically shift equilibrium payoffs and regulatory outcomes, often creating incentives for regulators to proactively develop and release technologies. Conversely, we identify a strategic phenomenon termed the "Poisoned Apple" effect: an agent may release a new technology, which neither they nor their opponent ultimately uses, solely to manipulate the regulator's choice of market design in their favor. This strategic release improves the releaser's welfare at the expense of their opponent and the regulator's fairness objectives. Our findings demonstrate that static regulatory frameworks are vulnerable to manipulation via technology expansion, necessitating dynamic market designs that adapt to the evolving landscape of AI capabilities.
- Unlocking Implicit Experience: Synthesizing Tool-Use Trajectories from Text
Enabling Large Language Models (LLMs) to effectively utilize tools in multi-turn interactions is essential for building capable autonomous agents. However, acquiring diverse and realistic multi-turn tool-use data remains a significant challenge. In this work, we propose a novel text-based paradigm. We observe that textual corpora naturally contain rich, multi-step problem-solving experiences, which can serve as an untapped, scalable, and authentic data source for multi-turn tool-use tasks. Based on this insight, we introduce GEM, a data synthesis pipeline that enables the generation and extraction of multi-turn tool-use trajectories from text corpora through a four-stage process: relevance filtering, workflow & tool extraction, trajectory grounding, and complexity refinement. To reduce the computational cost, we further train a specialized Trajectory Synthesizer via supervised fine-tuning. This model distills the complex generation pipeline into an efficient, end-to-end trajectory generator. Experiments demonstrate that our GEM-32B achieves a 16.5% improvement on the BFCL V3 Multi-turn benchmark. Our models partially surpass the performance of models trained on τ-bench (Airline and Retail) in-domain data, highlighting the superior generalization capability derived from our text-based synthesis paradigm. Notably, our Trajectory Synthesizer matches the quality of the full pipeline while significantly reducing inference latency and costs.
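A minimal skeleton of how such a four-stage synthesis pipeline could be wired together; the stage names come from the abstract, but every function body here is a placeholder assumption rather than the paper's actual implementation.

```python
# Hypothetical skeleton of a GEM-style text-to-trajectory pipeline.
# Stage names follow the abstract; the bodies are placeholders, not the paper's code.

def relevance_filter(docs):
    # Keep only documents that describe multi-step problem solving.
    return [d for d in docs if "step" in d.lower()]

def extract_workflow_and_tools(doc):
    # Pull an ordered list of steps and the tools each step would need.
    return {"steps": doc.split(". "), "tools": ["search", "calculator"]}

def ground_trajectory(workflow):
    # Turn each step into a (tool_call, observation) turn.
    return [{"turn": i, "tool_call": s, "observation": "..."}
            for i, s in enumerate(workflow["steps"])]

def refine_complexity(trajectory, min_turns=3):
    # Drop trajectories that are too short to be useful multi-turn data.
    return trajectory if len(trajectory) >= min_turns else None

def synthesize(docs):
    out = []
    for doc in relevance_filter(docs):
        traj = refine_complexity(ground_trajectory(extract_workflow_and_tools(doc)))
        if traj:
            out.append(traj)
    return out

print(synthesize(["First step: search the web. Then compute the total. Finally report."]))
```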
- RubricHub: A Comprehensive and Highly Discriminative Rubric Dataset via Automated Coarse-to-Fine Generation
Reinforcement Learning with Verifiable Rewards (RLVR) has driven substantial progress in reasoning-intensive domains like mathematics. However, optimizing open-ended generation remains challenging due to the lack of ground truth. While rubric-based evaluation offers a structured proxy for verification, existing methods suffer from scalability bottlenecks and coarse criteria, resulting in a supervision ceiling effect. To address this, we propose an automated Coarse-to-Fine Rubric Generation framework. By synergizing principle-guided synthesis, multi-model aggregation, and difficulty evolution, our approach produces comprehensive and highly discriminative criteria capable of capturing the subtle nuances. Based on this framework, we introduce RubricHub, a large-scale (sim110k) and multi-domain dataset. We validate its utility through a two-stage post-training pipeline comprising Rubric-based Rejection Sampling Fine-Tuning (RuFT) and Reinforcement Learning (RuRL). Experimental results demonstrate that RubricHub unlocks significant performance gains: our post-trained Qwen3-14B achieves state-of-the-art (SOTA) results on HealthBench (69.3), surpassing proprietary frontier models such as GPT-5. The code and data will be released soon.
- When Personalization Misleads: Understanding and Mitigating Hallucinations in Personalized LLMs
Personalized large language models (LLMs) adapt model behavior to individual users to enhance user satisfaction, yet personalization can inadvertently distort factual reasoning. We show that when personalized LLMs face factual queries, there exists a phenomenon where the model generates answers aligned with a user's prior history rather than the objective truth, resulting in personalization-induced hallucinations that degrade factual reliability and may propagate incorrect beliefs, due to representational entanglement between personalization and factual representations. To address this issue, we propose Factuality-Preserving Personalized Steering (FPPS), a lightweight inference-time approach that mitigates personalization-induced factual distortions while preserving personalized behavior. We further introduce PFQABench, the first benchmark designed to jointly evaluate factual and personalized question answering under personalization. Experiments across multiple LLM backbones and personalization methods show that FPPS substantially improves factual accuracy while maintaining personalized performance.
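The paper describes FPPS only as a lightweight inference-time method, so below is a generic activation-steering sketch of the general idea: removing a presumed "personalization direction" from a hidden state at inference time. The direction, layer choice, and scaling are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def remove_direction(hidden, direction, alpha=1.0):
    """Generic inference-time steering: project out (a scaled component of)
    a single direction from a hidden-state vector."""
    d = direction / (np.linalg.norm(direction) + 1e-8)
    return hidden - alpha * np.dot(hidden, d) * d

# Toy example: a hidden state that partly encodes a hypothetical
# "user-preference" direction entangled with factual content.
rng = np.random.default_rng(0)
hidden = rng.normal(size=8)
pref_direction = rng.normal(size=8)  # hypothetical personalization direction

steered = remove_direction(hidden, pref_direction)
# After removal, the steered state has ~zero component along that direction.
print(np.dot(steered, pref_direction / np.linalg.norm(pref_direction)))
```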
- ACoT-VLA: Action Chain-of-Thought for Vision-Language-Action Models
Vision-Language-Action (VLA) models have emerged as essential generalist robot policies for diverse manipulation tasks, conventionally relying on directly translating multimodal inputs into actions via Vision-Language Model (VLM) embeddings. Recent advancements have introduced explicit intermediary reasoning, such as sub-task prediction (language) or goal image synthesis (vision), to guide action generation. However, these intermediate reasoning steps are often indirect and inherently limited in their capacity to convey the full, granular information required for precise action execution. Instead, we posit that the most effective form of reasoning is one that deliberates directly in the action space. We introduce Action Chain-of-Thought (ACoT), a paradigm where the reasoning process itself is formulated as a structured sequence of coarse action intents that guide the final policy. In this paper, we propose ACoT-VLA, a novel architecture that materializes the ACoT paradigm. Specifically, we introduce two complementary components: an Explicit Action Reasoner (EAR) and an Implicit Action Reasoner (IAR). The former proposes coarse reference trajectories as explicit action-level reasoning steps, while the latter extracts latent action priors from internal representations of multimodal input, co-forming an ACoT that conditions the downstream action head to enable grounded policy learning. Extensive experiments in real-world and simulation environments demonstrate the superiority of our proposed method, which achieves 98.5%, 84.1%, and 47.4% on LIBERO, LIBERO-Plus and VLABench, respectively.
- ABC-Bench: Benchmarking Agentic Backend Coding in Real-World Development
The evolution of Large Language Models (LLMs) into autonomous agents has expanded the scope of AI coding from localized code generation to complex, repository-level, and execution-driven problem solving. However, current benchmarks predominantly evaluate code logic in static contexts, neglecting the dynamic, full-process requirements of real-world engineering, particularly in backend development which demands rigorous environment configuration and service deployment. To address this gap, we introduce ABC-Bench, a benchmark explicitly designed to evaluate agentic backend coding within a realistic, executable workflow. Using a scalable automated pipeline, we curated 224 practical tasks spanning 8 languages and 19 frameworks from open-source repositories. Distinct from previous evaluations, ABC-Bench requires agents to manage the entire development lifecycle, from repository exploration to instantiating containerized services and passing external end-to-end API tests. Our extensive evaluation reveals that even state-of-the-art models struggle to deliver reliable performance on these holistic tasks, highlighting a substantial disparity between current model capabilities and the demands of practical backend engineering. Our code is available at https://github.com/OpenMOSS/ABC-Bench.
- Multiplex Thinking: Reasoning via Token-wise Branch-and-Merge
Large language models often solve complex reasoning tasks more effectively with Chain-of-Thought (CoT), but at the cost of long, low-bandwidth token sequences. Humans, by contrast, often reason softly by maintaining a distribution over plausible next steps. Motivated by this, we propose Multiplex Thinking, a stochastic soft reasoning mechanism that, at each thinking step, samples K candidate tokens and aggregates their embeddings into a single continuous multiplex token. This preserves the vocabulary embedding prior and the sampling dynamics of standard discrete generation, while inducing a tractable probability distribution over multiplex rollouts. Consequently, multiplex trajectories can be directly optimized with on-policy reinforcement learning (RL). Importantly, Multiplex Thinking is self-adaptive: when the model is confident, the multiplex token is nearly discrete and behaves like standard CoT; when it is uncertain, it compactly represents multiple plausible next steps without increasing sequence length. Across challenging math reasoning benchmarks, Multiplex Thinking consistently outperforms strong discrete CoT and RL baselines from Pass@1 through Pass@1024, while producing shorter sequences. The code and checkpoints are available at https://github.com/GMLR-Penn/Multiplex-Thinking.
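The mechanism is concrete enough to sketch: at each step, sample K candidate next tokens and mix their embeddings into one continuous "multiplex" token. The probability-weighted mixing below is an assumption about the aggregation rule, used only to illustrate the idea.

```python
import numpy as np

def multiplex_token(logits, embedding_table, k=4, temperature=1.0, rng=None):
    """Sample K candidate next tokens and aggregate their embeddings into one
    continuous token (here: weighted by their renormalized probabilities)."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    cand = rng.choice(len(logits), size=k, replace=False, p=probs)
    weights = probs[cand] / probs[cand].sum()
    # When the model is confident, one weight dominates and the result is nearly
    # a discrete token embedding; when uncertain, it blends several plausible steps.
    return weights @ embedding_table[cand]

vocab, dim = 100, 16
rng = np.random.default_rng(0)
emb = rng.normal(size=(vocab, dim))
logits = rng.normal(size=vocab)
print(multiplex_token(logits, emb, k=4, rng=rng).shape)  # (16,)
```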
- Medical SAM3: A Foundation Model for Universal Prompt-Driven Medical Image Segmentation
Promptable segmentation foundation models such as SAM3 have demonstrated strong generalization capabilities through interactive and concept-based prompting. However, their direct applicability to medical image segmentation remains limited by severe domain shifts, the absence of privileged spatial prompts, and the need to reason over complex anatomical and volumetric structures. Here we present Medical SAM3, a foundation model for universal prompt-driven medical image segmentation, obtained by fully fine-tuning SAM3 on large-scale, heterogeneous 2D and 3D medical imaging datasets with paired segmentation masks and text prompts. Through a systematic analysis of vanilla SAM3, we observe that its performance degrades substantially on medical data, with its apparent competitiveness largely relying on strong geometric priors such as ground-truth-derived bounding boxes. These findings motivate full model adaptation beyond prompt engineering alone. By fine-tuning SAM3's model parameters on 33 datasets spanning 10 medical imaging modalities, Medical SAM3 acquires robust domain-specific representations while preserving prompt-driven flexibility. Extensive experiments across organs, imaging modalities, and dimensionalities demonstrate consistent and significant performance gains, particularly in challenging scenarios characterized by semantic ambiguity, complex morphology, and long-range 3D context. Our results establish Medical SAM3 as a universal, text-guided segmentation foundation model for medical imaging and highlight the importance of holistic model adaptation for achieving robust prompt-driven segmentation under severe domain shift. Code and model will be made available at https://github.com/AIM-Research-Lab/Medical-SAM3.
- NAACL: Noise-AwAre Verbal Confidence Calibration for LLMs in RAG Systems
Accurately assessing model confidence is essential for deploying large language models (LLMs) in mission-critical factual domains. While retrieval-augmented generation (RAG) is widely adopted to improve grounding, confidence calibration in RAG settings remains poorly understood. We conduct a systematic study across four benchmarks, revealing that LLMs exhibit poor calibration performance due to noisy retrieved contexts. Specifically, contradictory or irrelevant evidence tends to inflate the model's false certainty, leading to severe overconfidence. To address this, we propose NAACL Rules (Noise-AwAre Confidence CaLibration Rules) to provide a principled foundation for resolving overconfidence under noise. We further design NAACL, a noise-aware calibration framework that synthesizes supervision from about 2K HotpotQA examples guided by these rules. By performing supervised fine-tuning (SFT) with this data, NAACL equips models with intrinsic noise awareness without relying on stronger teacher models. Empirical results show that NAACL yields substantial gains, improving ECE scores by 10.9% in-domain and 8.0% out-of-domain. By bridging the gap between retrieval noise and verbal calibration, NAACL paves the way for both accurate and epistemically reliable LLMs.
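For reference, expected calibration error (ECE), the metric behind the 10.9% and 8.0% improvements cited above, is a standard binned gap between stated confidence and empirical accuracy; a minimal implementation (not the paper's code) looks like this.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by stated confidence and average the
    |accuracy - confidence| gap per bin, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Overconfident toy model: says 0.9 but is right only half the time -> ECE = 0.4.
print(expected_calibration_error([0.9, 0.9, 0.9, 0.9], [1, 0, 1, 0]))
```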
- YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation
Steering Large Language Models (LLMs) through activation interventions has emerged as a lightweight alternative to fine-tuning for alignment and personalization. Recent work on Bi-directional Preference Optimization (BiPO) shows that dense steering vectors can be learned directly from preference data in a Direct Preference Optimization (DPO) fashion, enabling control over truthfulness, hallucinations, and safety behaviors. However, dense steering vectors often entangle multiple latent factors due to neuron multi-semanticity, limiting their effectiveness and stability in fine-grained settings such as cultural alignment, where closely related values and behaviors (e.g., among Middle Eastern cultures) must be distinguished. In this paper, we propose Yet another Policy Optimization (YaPO), a reference-free method that learns sparse steering vectors in the latent space of a Sparse Autoencoder (SAE). By optimizing sparse codes, YaPO produces disentangled, interpretable, and efficient steering directions. Empirically, we show that YaPO converges faster, achieves stronger performance, and exhibits improved training stability compared to dense steering baselines. Beyond cultural alignment, YaPO generalizes to a range of alignment-related behaviors, including hallucination, wealth-seeking, jailbreak, and power-seeking. Importantly, YaPO preserves general knowledge, with no measurable degradation on MMLU. Overall, our results show that YaPO provides a general recipe for efficient, stable, and fine-grained alignment of LLMs, with broad applications to controllability and domain adaptation. The associated code and data are publicly available at https://github.com/MBZUAI-Paris/YaPO.
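A minimal sketch of the core object YaPO optimizes as described above: a sparse code in an SAE's latent space that is decoded into a steering vector and added to a model activation. The decoder, sparsity pattern, and injection point are illustrative assumptions, not the released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 64, 512

# Stand-ins for a pretrained sparse autoencoder's decoder and a model activation.
sae_decoder = rng.normal(size=(d_sae, d_model)) / np.sqrt(d_sae)
activation = rng.normal(size=d_model)

# A learnable sparse code: only a few SAE latents are active, so the resulting
# steering direction stays interpretable and disentangled.
sparse_code = np.zeros(d_sae)
sparse_code[[3, 41, 200]] = [0.8, -0.5, 1.2]   # hypothetical learned values

steering_vector = sparse_code @ sae_decoder     # decode back to model space
steered_activation = activation + steering_vector
print(steering_vector.shape, np.count_nonzero(sparse_code))  # (64,) 3
```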
- Spurious Rewards Paradox: Mechanistically Understanding How RLVR Activates Memorization Shortcuts in LLMs
Reinforcement Learning with Verifiable Rewards (RLVR) is highly effective for enhancing LLM reasoning, yet recent evidence shows models like Qwen 2.5 achieve significant gains even with spurious or incorrect rewards. We investigate this phenomenon and identify a "Perplexity Paradox": spurious RLVR triggers a divergence where answer-token perplexity drops while prompt-side coherence degrades, suggesting the model is bypassing reasoning in favor of memorization. Using Path Patching, Logit Lens, JSD analysis, and Neural Differential Equations, we uncover a hidden Anchor-Adapter circuit that facilitates this shortcut. We localize a Functional Anchor in the middle layers (L18-20) that triggers the retrieval of memorized solutions, followed by Structural Adapters in later layers (L21+) that transform representations to accommodate the shortcut signal. Finally, we demonstrate that scaling specific MLP keys within this circuit allows for bidirectional causal steering: artificially amplifying or suppressing contamination-driven performance. Our results provide a mechanistic roadmap for identifying and mitigating data contamination in RLVR-tuned models. Code is available at https://github.com/idwts/How-RLVR-Activates-Memorization-Shortcuts.
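Of the analysis tools mentioned, the logit lens is easy to illustrate: project an intermediate hidden state through a final-layer-norm-style normalization and the unembedding matrix to see which tokens a middle layer is already "predicting". The sketch below uses random stand-in weights; it shows the operation in general, not the paper's specific probing setup.

```python
import numpy as np

def logit_lens(hidden_state, unembed, ln_gain=None, eps=1e-5):
    """Project an intermediate-layer hidden state directly to vocabulary logits,
    applying a simple layer-norm-like normalization first."""
    h = hidden_state
    h = (h - h.mean()) / np.sqrt(h.var() + eps)
    if ln_gain is not None:
        h = h * ln_gain
    return h @ unembed  # (d_model,) @ (d_model, vocab) -> (vocab,)

rng = np.random.default_rng(0)
d_model, vocab = 32, 1000
unembed = rng.normal(size=(d_model, vocab))
hidden_mid_layer = rng.normal(size=d_model)  # stand-in for a mid-layer residual state

top5 = np.argsort(logit_lens(hidden_mid_layer, unembed))[-5:][::-1]
print(top5)  # token ids the intermediate layer would most strongly predict
```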
Solidot(36)
- Nearly a third of social media research has undisclosed industry ties
Social media researchers often need to work with the platforms, but these relationships are frequently not disclosed. Nearly a third of social media studies have undisclosed ties to industry: some researchers received funding from social media companies, and some had co-authored research with industry employees. The researchers argue that such ties could skew findings. They analyzed 295 social media papers published in Science, Nature, PNAS and sister journals such as Science Advances and Nature Communications, which together have been cited about 50,000 times and referenced in more than 15,000 news reports. One fifth of the listed authors acknowledged receiving funding from or having collaborated with social media companies. The researchers then used OpenAlex to analyze ties between the listed authors and social media companies and found that half of the authors had such ties, meaning about 30% of authors did not disclose a potential conflict of interest.
- Chinese open-source AI models hold 15% of the global share
According to an analysis by the AI tool platform OpenRouter and the venture capital firm a16z, generative AI developed by Chinese companies accounted for roughly 15% of the global market in November 2025, up sharply from 1% a year earlier. In performance tests, the DeepSeek model released last December ranked 9th among 92 models and 1st among open-source models, followed by Alibaba's Qwen; both outperformed the open models from Google and OpenAI. Japanese companies are also using China's DeepSeek and Qwen when building AI.
- Data centers will consume up to 70% of memory output in 2026
According to a new report, data centers will consume as much as 70% of memory output in 2026. The exponential growth in memory demand will almost certainly hit many other industries, including automotive, TVs, and consumer electronics. Although cars and consumer electronics use older memory types, memory makers have scaled back or entirely discontinued legacy memory chips. The report says memory capacity for 2028 is already sold out, let alone this year's. Nearly all electronics need memory, and soaring memory prices will force consumer electronics makers, whose margins are already thin, to pass most of the increased cost on to consumers, assuming they can still get memory at all. IDC has updated its 2026 forecasts, now expecting smartphone sales to fall 5% and PC sales to fall 9%, and these forecasts may be revised further in a few months. Analysts say this is the craziest period the memory industry has ever seen.
- Iran's internet blackout enters its 12th day
According to monitoring by Cloudflare Radar and Netblocks, Iran remains largely cut off from the internet, and the blackout has entered its 12th day. Before this, Iran's traffic briefly recovered but soon became intermittent again, suggesting the Iranian government is testing a tightly filtered domestic internet. Official figures say the earlier mass protests left more than 5,000 people dead and more than 20,000 arrested; unofficial estimates put the death toll at over 10,000.
- Porsche sold more EVs than combustion cars in Europe in 2025
Porsche announced last week that electric versions outsold combustion versions among the cars it sold in Europe in 2025, with plug-in versions accounting for 57.9% of its European sales. Its best-selling model is the Macan, offered in both electric and combustion versions; it sold 84,328 units worldwide, of which 45,367, or 53.8%, were electric. Even in the US market, the electric version accounted for a third of Macan sales. The US is the only major car market where EV sales declined, with EVs making up roughly 10% of total sales there last year.
- Microsoft ships emergency update to fix a bug that prevented shutdown
Microsoft's first Windows 11 update of 2026 turned out to have several serious problems, forcing the company to release an emergency fix. The emergency update addresses multiple bugs, including Windows 11 23H2 PCs failing to shut down properly and Remote Desktop login problems. Microsoft also disclosed other issues: Outlook Classic crashes when using POP accounts, a bug that has not yet been fixed, and Windows randomly showing a black screen, where the desktop freezes for a second or two, the screen goes black, and then recovers; that problem may be caused by the update itself or by a GPU driver compatibility issue.
- All OzLabs members have left IBM
OzLabs is an Australian free software developers' group whose members have been responsible for many well-known open source projects, including Samba, rsync, Linux PPP, Linux netfilter, Linux Advanced Power Management (APM), and OpenBMC. The group was founded in 1999, when Linuxcare hired Andrew Tridgell to build it. Turmoil at Linuxcare in 2001 led most members to leave and join the IBM Linux Technology Center to work on PowerPC Linux and related projects. But as of January 2026 all members have left IBM, ending OzLabs' 25-year association with the company.
- The rise and fall of Hollywood's monoculture
In the era of streaming and social media recommendation algorithms, people's attention is rarely captured by just a few works, and the Hollywood productions that once defined an era and drew the attention of countless people have become increasingly rare. Gone with the Wind (1939) sold 200 million movie tickets at a time when the US population was only 130 million. More than 100 million people watched the final episode of M*A*S*H in 1983. In 2025 only three American films grossed over $1 billion, down from nine in 2019. YouTube became the most-watched video platform on television not because it has the hottest shows but because it has something for everyone. The internet has broken Hollywood's monopoly on distribution.
- Threads surpasses X in mobile daily active users
The latest Similarweb data shows that Threads has overtaken X in mobile daily active users, while X still far outpaces Threads on the web, with about 150 million daily visits versus 8.5 million for Threads. According to Similarweb, as of January 7, 2026 Threads had 141.5 million daily active users on iOS and Android, compared with 125 million for X on mobile devices. Threads' growth has benefited from promotion on Meta's other social platforms such as Facebook and Instagram, as well as its focus on content creators and rapid rollout of new features. Over the past year Threads has added interest-based communities, improved filtering, direct messages, long-form posts, and disappearing posts, and it has recently been testing a games feature.
- Nvidia accused of contacting Anna's Archive for high-speed downloads of its pirated book library
Besides supplying AI chips, Nvidia develops its own large models, such as NeMo, Retro-48B, InstructRetro, and Megatron. Where does the training data for these models come from? Book authors allege that Nvidia trained its models on pirated book collections. Last Friday the plaintiffs amended their complaint, alleging that Nvidia used the pirated e-book library gathered by the shadow library Anna's Archive. Citing internal Nvidia emails and documents, the complaint says Nvidia employees proactively contacted Anna's Archive to ask what the paid "high-speed access" offered by the shadow library meant. Anna's Archive required internal approval from Nvidia management before providing the service. Nvidia reportedly approved the request within a week, and Anna's Archive then provided high-speed access to 500 TB of e-books. Nvidia is also accused of downloading books from LibGen, Sci-Hub, and Z-Library.
- Oxfam report says global wealth inequality hits new high
Oxfam has published its annual report, "Resisting the Rule of the Rich," saying global wealth inequality is accelerating. In 2025 billionaire wealth surged by $2.5 trillion, almost equal to the combined wealth of the poorer half of humanity (about 4.1 billion people). The number of billionaires worldwide passed 3,000 for the first time, and the fortune of the world's richest person, Elon Musk, surpassed $500 billion for the first time. The report warns that the ultra-rich are forming a new oligarchy, using their vast wealth to buy political, media, and judicial influence to protect their fortunes, dismantle progressive policies, and strip away basic civil and political rights. As examples, the report cites Jeff Bezos buying The Washington Post, Elon Musk buying Twitter/X, Patrick Soon-Shiong controlling the Los Angeles Times, and the French far-right billionaire Vincent Bollore owning CNews. Oxfam calls on people around the world to unite to defend their rights and fight for an alternative to inequality and oligarchy.
- 10 minutes of high-intensity exercise may help lower cancer risk
Researchers at the University of Newcastle in Australia found that just 10 minutes of high-intensity exercise raises the levels of several small molecules in the blood. These molecules can switch on DNA repair mechanisms and switch off cancer growth signals, and many of them reduce inflammation, maintain vascular health, and support metabolism. These rapid changes appear to suppress the growth of bowel cancer cells while speeding up the repair of damaged DNA. When the scientists exposed bowel cancer cells in the lab to blood containing these exercise-driven molecules, they observed broad genetic changes: the activity of more than 1,300 genes shifted, including genes involved in DNA repair, energy production, and cancer cell growth. The study suggests that exercise sends molecular signals through the blood that influence genes controlling tumor growth and genetic stability, adding to the evidence that staying physically active is an important part of cancer prevention. The team found that exercise increased the activity of genes supporting mitochondrial energy metabolism, helping cells use oxygen more efficiently, while genes associated with rapid cell division were downregulated, potentially making cancer cells less aggressive. Blood collected after exercise also showed enhanced DNA repair capacity, activating a key repair gene called PNKP.