Weekly Digest — 2025-W51
133 unique stories (2025-12-15 → 2025-12-21), aggregated across 8 sources.
Hacker News(42)
- Upcoming Changes to Let's Encrypt Certificates (community.letsencrypt.org)
- “Super secure” messaging app leaks everyone's phone number (ericdaigle.ca)
- Problems with D-Bus on the Linux desktop (blog.vaxry.net)
- US Tech Force (techforce.gov)
- Pro-democracy HK tycoon Jimmy Lai convicted in national security trial (www.bbc.com)
- Thousands of U.S. farmers have Parkinson's. They blame a deadly pesticide (www.mlive.com)
- Ty: A fast Python type checker and LSP (astral.sh)
- AI will make formal verification go mainstream (martin.kleppmann.com)
- Thin desires are eating life (www.joanwestenberg.com)
- GPT Image 1.5 (openai.com)
- No Graphics API (www.sebastianaaltonen.com)
- The GitHub Actions control plane is no longer free (www.blacksmith.sh)
GitHub Trending(27)
- simstudioai / sim
Open-source platform to build and deploy AI agent workflows.
- ZJU-LLMs / Foundations-of-LLMs
A book for Learning the Foundations of LLMs
- jellyfin / jellyfin-desktop
Jellyfin Desktop Client
- shadcn-ui / ui
A set of beautifully-designed, accessible components and a code distribution platform. Works with your favorite frameworks. Open Source. Open Code.
- CopilotKit / CopilotKit
React UI + elegant infrastructure for AI Copilots, AI chatbots, and in-app AI agents. The Agentic Frontend 🪁
- obsproject / obs-studio
OBS Studio - Free and open source software for live streaming and screen recording
- virattt / ai-hedge-fund
An AI Hedge Fund Team
- thedotmack / claude-mem
A Claude Code plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude's agent-sdk), and injects relevant context back into future sessions.
- Morganamilo / paru
Feature packed AUR helper
- C4illin / ConvertX
💾 Self-hosted online file converter. Supports 1000+ formats ⚙️
- resemble-ai / chatterbox
SoTA open-source TTS
- Free-TV / IPTV
M3U Playlist for free TV channels
Hugging Face(30)
- EgoX: Egocentric Video Generation from a Single Exocentric Video
Egocentric perception enables humans to experience and understand the world directly from their own point of view. Translating exocentric (third-person) videos into egocentric (first-person) videos opens up new possibilities for immersive understanding but remains highly challenging due to extreme camera pose variations and minimal view overlap. This task requires faithfully preserving visible content while synthesizing unseen regions in a geometrically consistent manner. To achieve this, we present EgoX, a novel framework for generating egocentric videos from a single exocentric input. EgoX leverages the pretrained spatio-temporal knowledge of large-scale video diffusion models through lightweight LoRA adaptation and introduces a unified conditioning strategy that combines exocentric and egocentric priors via width- and channel-wise concatenation. Additionally, a geometry-guided self-attention mechanism selectively attends to spatially relevant regions, ensuring geometric coherence and high visual fidelity. Our approach achieves coherent and realistic egocentric video generation while demonstrating strong scalability and robustness across unseen and in-the-wild videos.
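The adaptation step above relies on LoRA rather than full fine-tuning of the video diffusion backbone. As a generic illustration of that mechanism (a minimal sketch of a LoRA layer, not EgoX's actual code; rank and alpha values are arbitrary here):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # the pretrained backbone stays frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: update starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(64, 64))
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```

Only A and B are trained, which is what makes adapting a large pretrained model "lightweight".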
- DentalGPT: Incentivizing Multimodal Complex Reasoning in Dentistry
Reliable interpretation of multimodal data in dentistry is essential for automated oral healthcare, yet current multimodal large language models (MLLMs) struggle to capture fine-grained dental visual details and lack sufficient reasoning ability for precise diagnosis. To address these limitations, we present DentalGPT, a specialized dental MLLM developed through high-quality domain knowledge injection and reinforcement learning. Specifically, we constructed the largest annotated multimodal dental dataset to date by aggregating over 120k dental images paired with detailed descriptions that highlight diagnostically relevant visual features. Training on this dataset significantly enhances the MLLM's visual understanding of dental conditions, while the subsequent reinforcement learning stage further strengthens its capability for multimodal complex reasoning. Comprehensive evaluations on intraoral and panoramic benchmarks, along with dental subsets of medical VQA benchmarks, show that DentalGPT achieves superior performance in disease classification and dental VQA tasks, outperforming many state-of-the-art MLLMs despite having only 7B parameters. These results demonstrate that high-quality dental data combined with staged adaptation provides an effective pathway for building capable and domain-specialized dental MLLMs.
- SVG-T2I: Scaling Up Text-to-Image Latent Diffusion Model Without Variational Autoencoder
Visual generation grounded in Visual Foundation Model (VFM) representations offers a highly promising unified pathway for integrating visual understanding, perception, and generation. Despite this potential, training large-scale text-to-image diffusion models entirely within the VFM representation space remains largely unexplored. To bridge this gap, we scale the SVG (Self-supervised representations for Visual Generation) framework, proposing SVG-T2I to support high-quality text-to-image synthesis directly in the VFM feature domain. By leveraging a standard text-to-image diffusion pipeline, SVG-T2I achieves competitive performance, reaching 0.75 on GenEval and 85.78 on DPG-Bench. This performance validates the intrinsic representational power of VFMs for generative tasks. We fully open-source the project, including the autoencoder and generation model, together with their training, inference, evaluation pipelines, and pre-trained weights, to facilitate further research in representation-driven visual generation.
- V-RGBX: Video Editing with Accurate Controls over Intrinsic Properties
Large-scale video generation models have shown remarkable potential in modeling photorealistic appearance and lighting interactions in real-world scenes. However, a closed-loop framework that jointly understands intrinsic scene properties (e.g., albedo, normal, material, and irradiance), leverages them for video synthesis, and supports editable intrinsic representations remains unexplored. We present V-RGBX, the first end-to-end framework for intrinsic-aware video editing. V-RGBX unifies three key capabilities: (1) video inverse rendering into intrinsic channels, (2) photorealistic video synthesis from these intrinsic representations, and (3) keyframe-based video editing conditioned on intrinsic channels. At the core of V-RGBX is an interleaved conditioning mechanism that enables intuitive, physically grounded video editing through user-selected keyframes, supporting flexible manipulation of any intrinsic modality. Extensive qualitative and quantitative results show that V-RGBX produces temporally consistent, photorealistic videos while propagating keyframe edits across sequences in a physically plausible manner. We demonstrate its effectiveness in diverse applications, including object appearance editing and scene-level relighting, surpassing the performance of prior methods.
- Sliding Window Attention Adaptation
The self-attention mechanism in Transformer-based Large Language Models (LLMs) scales quadratically with input length, making long-context inference expensive. Sliding window attention (SWA) reduces this cost to linear complexity, but naively enabling complete SWA at inference time for models pretrained with full attention (FA) causes severe long-context performance degradation due to training-inference mismatch. This raises the question: can FA-pretrained LLMs be adapted to SWA without pretraining? We investigate this by proposing Sliding Window Attention Adaptation (SWAA), a set of practical recipes that combine five methods for better adaptation: (1) applying SWA only during prefilling; (2) preserving "sink" tokens; (3) interleaving FA/SWA layers; (4) chain-of-thought (CoT); and (5) fine-tuning. Our experiments show that SWA adaptation is feasible but non-trivial: no single method suffices, yet specific synergistic combinations effectively recover the original long-context performance. We further analyze the performance-efficiency trade-offs of different SWAA configurations and provide recommended recipes for diverse scenarios. Our code is available at https://github.com/yuyijiong/sliding-window-attention-adaptation
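Recipes (1) and (2), a sliding window plus preserved "sink" tokens, are easiest to see as an attention mask. A minimal sketch (illustrative only, not the paper's implementation; the window size and sink count are arbitrary):

```python
import numpy as np

def swa_mask(seq_len: int, window: int, n_sinks: int) -> np.ndarray:
    """Boolean mask: True where query position i may attend to key position j."""
    i = np.arange(seq_len)[:, None]  # query positions
    j = np.arange(seq_len)[None, :]  # key positions
    causal = j <= i                  # never attend to the future
    in_window = (i - j) < window     # local sliding window
    is_sink = j < n_sinks            # sink tokens stay visible to every query
    return causal & (in_window | is_sink)

print(swa_mask(seq_len=10, window=4, n_sinks=2).astype(int))
```

Interleaving FA/SWA layers (recipe 3) then amounts to applying this mask in some layers and the plain causal mask in others.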
- PersonaLive! Expressive Portrait Image Animation for Live Streaming
Current diffusion-based portrait animation models predominantly focus on enhancing visual quality and expression realism while overlooking generation latency and real-time performance, which limits their use in live-streaming scenarios. We propose PersonaLive, a novel diffusion-based framework for streaming real-time portrait animation with multi-stage training recipes. Specifically, we first adopt hybrid implicit signals, namely implicit facial representations and 3D implicit keypoints, to achieve expressive image-level motion control. Then, a few-step appearance distillation strategy is proposed to eliminate appearance redundancy in the denoising process, greatly improving inference efficiency. Finally, we introduce an autoregressive micro-chunk streaming generation paradigm equipped with a sliding training strategy and a historical keyframe mechanism to enable low-latency and stable long-term video generation. Extensive experiments demonstrate that PersonaLive achieves state-of-the-art performance with up to 7-22x speedup over prior diffusion-based portrait animation models.
- ReFusion: A Diffusion Large Language Model with Parallel Autoregressive Decoding
Autoregressive models (ARMs) are hindered by slow sequential inference. While masked diffusion models (MDMs) offer a parallel alternative, they suffer from critical drawbacks: high computational overhead from precluding Key-Value (KV) caching, and incoherent generation arising from learning dependencies over an intractable space of token combinations. To address these limitations, we introduce ReFusion, a novel masked diffusion model that achieves superior performance and efficiency by elevating parallel decoding from the token level to a higher slot level, where each slot is a fixed-length, contiguous sub-sequence. This is achieved through an iterative "plan-and-infill" decoding process: a diffusion-based planning step first identifies a set of weakly dependent slots, and an autoregressive infilling step then decodes these selected slots in parallel. The slot-based design simultaneously unlocks full KV cache reuse with a unified causal framework and reduces the learning complexity from the token combination space to a manageable slot-level permutation space. Extensive experiments on seven diverse benchmarks show that ReFusion not only overwhelmingly surpasses prior MDMs, with 34% performance gains and an over 18x speedup on average, but also bridges the performance gap to strong ARMs while maintaining a 2.33x average speedup.
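The decoding loop the abstract describes alternates planning and infilling over fixed-length slots. A toy sketch of that control flow (the planner and infiller below are stand-ins, random selection and dummy tokens, rather than the paper's diffusion-based planner and autoregressive infiller):

```python
import random

SLOT_LEN, N_SLOTS, SLOTS_PER_STEP = 4, 8, 3
MASK = None
slots = [[MASK] * SLOT_LEN for _ in range(N_SLOTS)]

def plan(slots):
    """Stand-in planner: pick up to SLOTS_PER_STEP still-masked slots.
    The paper selects weakly dependent slots via a diffusion step."""
    masked = [k for k, s in enumerate(slots) if s[0] is MASK]
    return random.sample(masked, min(SLOTS_PER_STEP, len(masked)))

def infill(k):
    """Stand-in infiller: fill one slot (autoregressive decoding in the paper)."""
    return [f"tok{k}_{t}" for t in range(SLOT_LEN)]

steps = 0
while any(s[0] is MASK for s in slots):
    for k in plan(slots):   # the selected slots are decoded in parallel
        slots[k] = infill(k)
    steps += 1
print(f"decoded {N_SLOTS} slots in {steps} plan-and-infill steps")
```

Because each slot is a contiguous sub-sequence decoded left to right, a causal KV cache can be reused within and across slots, which is the efficiency argument in the abstract.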
- Towards Scalable Pre-training of Visual Tokenizers for Generation
The quality of the latent space in visual tokenizers (e.g., VAEs) is crucial for modern generative models. However, the standard reconstruction-based training paradigm produces a latent space that is biased towards low-level information, leading to a foundational flaw: better pixel-level accuracy does not lead to higher-quality generation. This implies that pouring extensive compute into visual tokenizer pre-training translates poorly into improved generation performance. We identify this as the "pre-training scaling problem" and suggest a necessary shift: to be effective for generation, a latent space must concisely represent high-level semantics. We present VTP, a unified visual tokenizer pre-training framework, pioneering the joint optimization of image-text contrastive, self-supervised, and reconstruction losses. Our large-scale study reveals two principal findings: (1) understanding is a key driver of generation, and (2) joint pre-training yields much better scaling properties, with generative performance scaling effectively with the compute, parameters, and data allocated to visual tokenizer pre-training. After large-scale pre-training, our tokenizer delivers a competitive profile (78.2% zero-shot accuracy and 0.36 rFID on ImageNet) and 4.1 times faster convergence on generation compared to advanced distillation methods. More importantly, it scales effectively: without modifying standard DiT training specs, solely investing more FLOPS in pre-training VTP achieves a 65.8% FID improvement in downstream generation, while a conventional autoencoder stagnates very early, at 1/10 of the FLOPS. Our pre-trained models are available at https://github.com/MiniMax-AI/VTP.
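The joint objective is the core of VTP's recipe. A minimal sketch of how the three losses might be combined (assumed interfaces and loss choices, not the paper's code; the encoders that would produce these tensors are omitted):

```python
import torch
import torch.nn.functional as F

def vtp_style_loss(img_emb, txt_emb, student, teacher, recon, pixels,
                   w_con=1.0, w_ssl=1.0, w_rec=1.0, temp=0.07):
    # (1) Image-text contrastive loss (CLIP-style symmetric cross-entropy).
    logits = img_emb @ txt_emb.t() / temp
    labels = torch.arange(len(logits))
    l_con = (F.cross_entropy(logits, labels) +
             F.cross_entropy(logits.t(), labels)) / 2
    # (2) Self-supervised loss: align a student view with a (frozen) teacher view.
    l_ssl = 1 - F.cosine_similarity(student, teacher).mean()
    # (3) Reconstruction loss back to pixels, as in a standard autoencoder.
    l_rec = F.mse_loss(recon, pixels)
    return w_con * l_con + w_ssl * l_ssl + w_rec * l_rec

# Dummy tensors just to show the call shape.
B, D = 8, 64
loss = vtp_style_loss(F.normalize(torch.randn(B, D), dim=-1),
                      F.normalize(torch.randn(B, D), dim=-1),
                      torch.randn(B, D), torch.randn(B, D),
                      torch.randn(B, 3, 32, 32), torch.randn(B, 3, 32, 32))
print(loss.item())
```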
- Memory in the Age of AI Agents
Memory has emerged as, and will remain, a core capability of foundation model-based agents. As research on agent memory rapidly expands and attracts unprecedented attention, the field has also become increasingly fragmented. Existing works that fall under the umbrella of agent memory often differ substantially in their motivations, implementations, and evaluation protocols, while the proliferation of loosely defined memory terminologies has further obscured conceptual clarity. Traditional taxonomies such as long/short-term memory have proven insufficient to capture the diversity of contemporary agent memory systems. This work aims to provide an up-to-date landscape of current agent memory research. We begin by clearly delineating the scope of agent memory and distinguishing it from related concepts such as LLM memory, retrieval-augmented generation (RAG), and context engineering. We then examine agent memory through the unified lenses of forms, functions, and dynamics. From the perspective of forms, we identify three dominant realizations of agent memory, namely token-level, parametric, and latent memory. From the perspective of functions, we propose a finer-grained taxonomy that distinguishes factual, experiential, and working memory. From the perspective of dynamics, we analyze how memory is formed, evolved, and retrieved over time. To support practical development, we compile a comprehensive summary of memory benchmarks and open-source frameworks. Beyond consolidation, we articulate a forward-looking perspective on emerging research frontiers, including memory automation, reinforcement learning integration, multimodal memory, multi-agent memory, and trustworthiness issues. We hope this survey serves not only as a reference for existing work, but also as a conceptual foundation for rethinking memory as a first-class primitive in the design of future agentic intelligence.
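To make the survey's three lenses concrete, here is one illustrative reading of the taxonomy as a toy data model (my own sketch, not anything from the paper):

```python
from dataclasses import dataclass, field

FORMS = {"token-level", "parametric", "latent"}     # how memory is realized
FUNCTIONS = {"factual", "experiential", "working"}  # what role it plays

@dataclass
class MemoryEntry:
    content: str
    form: str
    function: str

@dataclass
class MemoryStore:
    entries: list = field(default_factory=list)

    def form_memory(self, content, form, function):  # dynamics: formation
        assert form in FORMS and function in FUNCTIONS
        self.entries.append(MemoryEntry(content, form, function))

    def evolve(self):                                # dynamics: evolution
        # e.g. consolidation or decay; here, working memory is simply dropped
        self.entries = [e for e in self.entries if e.function != "working"]

    def retrieve(self, function):                    # dynamics: retrieval
        return [e.content for e in self.entries if e.function == function]

store = MemoryStore()
store.form_memory("user prefers metric units", "token-level", "factual")
store.form_memory("scratchpad for current task", "token-level", "working")
store.evolve()
print(store.retrieve("factual"))
```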
- QwenLong-L1.5: Post-Training Recipe for Long-Context Reasoning and Memory Management
We introduce QwenLong-L1.5, a model that achieves superior long-context reasoning capabilities through systematic post-training innovations. The key technical breakthroughs of QwenLong-L1.5 are as follows: (1) Long-Context Data Synthesis Pipeline: We develop a systematic synthesis framework that generates challenging reasoning tasks requiring multi-hop grounding over globally distributed evidence. By deconstructing documents into atomic facts and their underlying relationships, and then programmatically composing verifiable reasoning questions, our approach creates high-quality training data at scale, moving substantially beyond simple retrieval tasks to enable genuine long-range reasoning capabilities. (2) Stabilized Reinforcement Learning for Long-Context Training: To overcome the critical instability in long-context RL, we introduce task-balanced sampling with task-specific advantage estimation to mitigate reward bias, and propose Adaptive Entropy-Controlled Policy Optimization (AEPO) that dynamically regulates exploration-exploitation trade-offs. (3) Memory-Augmented Architecture for Ultra-Long Contexts: Recognizing that even extended context windows cannot accommodate arbitrarily long sequences, we develop a memory management framework with multi-stage fusion RL training that seamlessly integrates single-pass reasoning with iterative memory-based processing for tasks exceeding 4M tokens. Based on Qwen3-30B-A3B-Thinking, QwenLong-L1.5 achieves performance comparable to GPT-5 and Gemini-2.5-Pro on long-context reasoning benchmarks, surpassing its baseline by 9.90 points on average. On ultra-long tasks (1M–4M tokens), QwenLong-L1.5's memory-agent framework yields a 9.48-point gain over the agent baseline. Additionally, the acquired long-context reasoning ability translates to enhanced performance in general domains such as scientific reasoning, memory-tool use, and extended dialogue.
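For the ultra-long (beyond-context-window) regime, the abstract describes iterative memory-based processing fused with single-pass reasoning. A toy sketch of that control flow (update_memory and answer stand in for LLM calls; the chunk size and memory bound are arbitrary):

```python
def chunks(text: str, size: int):
    for i in range(0, len(text), size):
        yield text[i:i + size]

def update_memory(memory: str, chunk: str, question: str) -> str:
    """Stand-in for an LLM call that compresses memory + chunk,
    keeping only what is relevant to the question."""
    return (memory + " | " + chunk)[-500:]  # crude bounded memory

def answer(memory: str, question: str) -> str:
    """Stand-in for the final single-pass reasoning call."""
    return f"answer to {question!r} from a {len(memory)}-char memory"

doc, question = "x" * 10_000, "What changed in Q3?"
memory = ""
for chunk in chunks(doc, size=2_000):  # each call fits in the context window
    memory = update_memory(memory, chunk, question)
print(answer(memory, question))
```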
- LongVie 2: Multimodal Controllable Ultra-Long Video World Model
Building video world models upon pretrained video generation systems represents an important yet challenging step toward general spatiotemporal intelligence. A world model should possess three essential properties: controllability, long-term visual quality, and temporal consistency. To this end, we take a progressive approach: first enhancing controllability, then extending toward long-term, high-quality generation. We present LongVie 2, an end-to-end autoregressive framework trained in three stages: (1) Multi-modal guidance, which integrates dense and sparse control signals to provide implicit world-level supervision and improve controllability; (2) Degradation-aware training on the input frame, bridging the gap between training and long-term inference to maintain high visual quality; and (3) History-context guidance, which aligns contextual information across adjacent clips to ensure temporal consistency. We further introduce LongVGenBench, a comprehensive benchmark comprising 100 high-resolution one-minute videos covering diverse real-world and synthetic environments. Extensive experiments demonstrate that LongVie 2 achieves state-of-the-art performance in long-range controllability, temporal coherence, and visual fidelity, and supports continuous video generation lasting up to five minutes, marking a significant step toward unified video world modeling.
- Finch: Benchmarking Finance & Accounting across Spreadsheet-Centric Enterprise Workflows
We introduce Finch, a finance & accounting benchmark for evaluating AI agents on real-world, enterprise-grade professional workflows: interleaving data entry, structuring, formatting, web search, cross-file retrieval, calculation, modeling, validation, translation, visualization, and reporting. Finch is sourced from authentic enterprise workspaces at Enron (15,000 spreadsheets and 500,000 emails from 150 employees) and other financial institutions, preserving in-the-wild messiness across multimodal artifacts (text, tables, formulas, charts, code, and images) and spanning diverse domains such as budgeting, trading, and asset management. We propose a workflow construction process that combines LLM-assisted discovery with expert annotation: (1) LLM-assisted, expert-verified derivation of workflows from real-world email threads and version histories of spreadsheet files, and (2) meticulous expert annotation of workflows, requiring over 700 hours of domain-expert effort. This yields 172 composite workflows with 384 tasks, involving 1,710 spreadsheets with 27 million cells, along with PDFs and other artifacts, capturing the intrinsically messy, long-horizon, knowledge-intensive, and collaborative nature of real-world enterprise work. We conduct both human and automated evaluations of frontier AI systems, including GPT 5.1, Claude Sonnet 4.5, Gemini 3 Pro, Grok 4, and Qwen 3 Max: GPT 5.1 Pro spends 48 hours in total yet passes only 38.4% of workflows, while Claude Sonnet 4.5 passes just 25.0%. Comprehensive case studies further surface the challenges that real-world enterprise workflows pose for AI agents.
Solidot(34)
- After DRAM and SSDs, hard drive prices are rising too
Following DRAM and SSDs, hard disk drives have also been getting more expensive over the past few months. In October–December, 3.5-inch 1TB drives for desktop PCs and surveillance cameras rose about 4% from the previous quarter to roughly $53.00, while 2.5-inch 1TB laptop drives rose about 3% to roughly $50.00, the largest increases since October–December 2023. China is accelerating purchases of HDDs for PCs, mainly for use in surveillance cameras. HDD prices are expected to keep rising for some time.
- GNOME bans AI-generated Shell extensions
After an influx of AI-generated GNOME Shell extensions, the GNOME project has announced it will refuse to accept them. The developers say using AI as a learning aid or as a development tool such as code completion is not banned, but extension authors must be able to reasonably explain the code they submit. Any sign that code was AI-generated, such as large amounts of unnecessary code, inconsistent coding style, or calls to fabricated APIs, will lead to rejection. GNOME developers note that some authors using AI do not understand the code they submit.
- Time's Person of the Year: the architects of AI
Time's Person of the Year is the chief architects of the AI era: Nvidia CEO Jensen Huang, AMD CEO Lisa Su, xAI CEO Elon Musk, Meta CEO Mark Zuckerberg, OpenAI CEO Sam Altman, Fei-Fei Li, often called the godmother of AI, Anthropic CEO Dario Amodei, and Google DeepMind CEO Demis Hassabis. For better or worse, Time says, these people dominated this year's headlines: they ushered in the era of machine intelligence, inspiring both awe and alarm, changing the status quo and pushing past what seemed possible.
- Denmark plans strict limits on social media for children under 15
Following Australia, Denmark plans to strictly restrict social media use by children under 15. The Danish government has reached an agreement with the three parties of the governing coalition and two opposition parties in parliament, and the plan could become law as early as mid-2026. The proposed measures would give some parents the right to let their children use social media from age 13, but the full plan has not yet been published. Denmark's Ministry of Digital Affairs announced last month a new app called "digital evidence," expected to launch next spring, which is likely to be the centerpiece of the plan: the app will present proof of age to ensure users comply with social media age limits. Malaysia and Norway are taking similar measures.
- iRobot files for bankruptcy reorganization
iRobot, maker of the Roomba robot vacuum, has filed for bankruptcy reorganization. Under the restructuring agreement, control of iRobot passes to its main contract manufacturer, Shenzhen PICEA Robotics, and its subsidiary Santrum Hong Kong. iRobot's core business has been squeezed by competition from Chinese-made robot vacuums; the company once tried to sell itself to e-commerce giant Amazon, but the deal failed to win approval from EU regulators. Valued at as much as $3.56 billion in 2021, it is now worth only $140 million. Most iRobot products sold in the US are made in Vietnam, and the 46% US tariff on Vietnamese goods added $23 million to its costs this year. In its 35 years, iRobot has built more than 50 million robot vacuums.
- CEOs plan to keep increasing AI spending in 2026
Consulting firm Teneo surveyed more than 350 CEOs of listed companies, each with annual revenue above $1 billion. 68% of the CEOs plan to increase AI spending in 2026, even as respondents report that fewer than half of their current AI projects have returned more than they cost. The CEOs say AI has been most successful in marketing and customer service, while facing challenges in high-stakes areas such as security, legal, and HR. Teneo also surveyed about 400 institutional investors: 53% expect AI projects to start paying back within six months of investment. Among CEOs of large companies, those with annual revenue of $10 billion or more, 84% believe AI investments will take more than six months to pay off. In addition, 67% of CEOs believe AI will increase entry-level headcount at their companies, and 58% believe it will increase leadership headcount.
- US Gen Z turns back to physical media
Streaming has dominated media consumption since the 2010s, and sales of physical media such as DVDs and CDs declined with it. But physical media still circulates, and in recent years sales of some formats have rebounded. One driving force is Gen Z: streaming services keep getting more expensive, while a $3–5 physical DVD can be cheaper than buying the digital version and confers genuine ownership. Sales on Discogs, a marketplace for physical CDs, are up 8% this year over the same period last year. Data from the industry group Digital Entertainment Group show that third-quarter sales of DVDs, Blu-rays, and 4K Ultra HD Blu-rays fell 3% year-over-year, versus a drop of nearly 26% a year earlier. The RIAA projected that 2024 CD sales would grow 1.5% year-over-year; vinyl sales had already overtaken CDs in 2023.
- Mars has a significant influence on Earth's climate
Earth's climate has swung between ice ages and warm periods over millions of years, driven mainly by small variations in Earth's orbital parameters and axial tilt. These long-term variations, known collectively as Milankovitch cycles, reflect the continual gravitational perturbations Earth receives from the other planets: gravitational interactions slowly alter Earth's orbital eccentricity, axial tilt, and precession, modulating the solar radiation reaching the surface and shaping large-scale climate patterns. Earlier work established that Jupiter and Venus play key roles in this process. A new high-resolution numerical analysis shows that Mars, despite its comparatively small mass, also has a significant and previously underestimated influence on Earth's climate patterns. Using computer simulations, the team systematically varied Mars's mass from zero to ten times its present value and tracked the effects on Earth's orbital parameters over millions of years, finding that Mars is an important factor in Earth's seasonality and climate variability. The simulations show that the roughly 100,000-year cycle governing transitions between ice ages and warm periods is directly affected by Mars, as is Earth's axial tilt. The 41,000-year obliquity cycle common in the geological record lengthens markedly as Mars's mass increases; at ten times its present mass, the cycle would stretch to roughly 45,000–55,000 years, enough to substantially shift the timing of ice sheet growth and melt in both hemispheres.
- Scientists identify a key gene that determines femaleness in cucumbers
Unlike in animals, sex in plants is not fixed at birth but is regulated by genes, hormone levels, and environmental signals, making it far more complex. Sex determination has broad practical value in agriculture: for crops harvested for seeds or fruit, more female flowers mean higher yields; for ornamental crops such as ginkgo trees, the sex ratio can be controlled to suit different needs; and in hybrid breeding, pure female lines eliminate the emasculation step and cut costs. Scientists at China Agricultural University have found that the key gene CsARF3 bridges auxin and ethylene signaling to precisely control sex determination in cucumber. When CsARF3 was knocked out, the plants stopped producing female flowers entirely and bore only male flowers; when the gene was overexpressed, the number of female flowers rose significantly. Crucially, even applying auxin externally could not rescue the mutant phenotype, showing that CsARF3 is an indispensable link in the auxin signaling pathway.
- Entry-level IT jobs are shrinking sharply in China, India, and beyond
As AI takes over entry-level work, entry-level IT jobs in countries such as China and India have shrunk sharply. A student at a top Indian engineering school said fewer than a quarter of his 400 classmates had received job offers, and panic was spreading across campus. Engineering students in China, Dubai, and Kenya report the same. Tasks once given to fresh graduates, such as debugging, testing, and routine software maintenance, are increasingly automated. A SignalFire report shows that large tech companies hired less than half as many fresh graduates over the past three years; although hiring rebounded in 2024, only 7% of new hires were fresh graduates. An EY report last month found that Indian IT services firms have cut entry-level positions by 20%–25% because of automation and AI. Job platforms such as LinkedIn, Indeed, and Eures show that junior tech roles in major EU countries fell 35% in 2024.
- EU softens its 2035 ban on combustion-engine cars
The EU is relaxing the ban on internal-combustion cars originally set for 2035, allowing small numbers of low-emission vehicles to stay on sale. The original plan required new cars to be zero-emission by 2035, a 100% cut from 2021 levels; that has been eased to a 90% cut, leaving room for plug-in hybrids, which pair an electric motor with a combustion engine that can recharge the battery without hunting for a charging station. EU officials say the move will not affect the goal of making the economies of the EU's 27 member states carbon neutral by 2050.
- 40% of fMRI signals are inconsistent with actual brain activity
Functional magnetic resonance imaging (fMRI) has been widely used in brain research for three decades, but according to a study published in Nature Neuroscience, fMRI signals contain a great deal of noise: 40% of them are inconsistent with actual brain activity. The researchers found that in 40% of cases an increase in fMRI signal correlated with decreased brain activity, and in regions of increased brain activity the fMRI signal instead weakened. The results suggest there is no universally valid coupling between the oxygen levels measured by MRI and neuronal activity, fundamentally challenging how fMRI data have been interpreted in relation to neuronal activity.