Weekly Digest — 2026-W13
222 unique stories (2026-03-23 → 2026-03-29), aggregated across 8 sources.
Hacker News (42)
- Autoresearch on an old research idea (ykumar.me)
- US and TotalEnergies reach 'nearly $1B' deal to end offshore wind projects (www.lemonde.fr)
- Is it a pint? (isitapint.com)
- Cyber.mil serving file downloads using TLS certificate which expired 3 days ago (www.cyber.mil)
- If DSPy is so great, why isn't anyone using it? (skylarbpayne.com)
- iPhone 17 Pro Demonstrated Running a 400B LLM (twitter.com)
- GitHub is once again down (www.githubstatus.com)
- Is anybody else bored of talking about AI? (blog.jakesaunders.dev)
- Epic Games to cut more than 1k jobs as Fortnite usage falls (www.reuters.com)
- Wine 11 rewrites how Linux runs Windows games at the kernel level, with massive speed gains (www.xda-developers.com)
- Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised (github.com)
- Arm AGI CPU (newsroom.arm.com)
GitHub Trending (28)
- FujiwaraChoki / MoneyPrinterV2
- bytedance / deer-flow
- Crosstalk-Solutions / project-nomad
- vxcontrol / pentagi
- browser-use / browser-use
- TauricResearch / TradingAgents
- pascalorg / editor
- supermemoryai / supermemory
- harry0703 / MoneyPrinterTurbo
- mvanhorn / last30days-skill
- BerriAI / litellm
- letta-ai / claude-subconscious
Product Hunt (42)
- Claude Usage Tracker
See exactly how much you spend on Claude, across every tool
- Zoer.ai
Build full-stack webapps from the database up
- Honestly
Real reviews from Reddit & YouTube when shopping online
- Pause.do
Interrupt scrolling, tab overload, and AI autopilot
- Tobira.ai
A network where AI agents find deals for their humans
- Fastlane
Tinder for Marketing
- Flux
Fix production bugs by replaying them locally
- What The Duck!
Duck Hunt but with your finger and custom targets
- Redbean
Bring your original characters to life
- NextPhone
24/7 AI answering service for service-based businesses
- Kitty Points Leaderboard
Find interesting community members and see how you stack up
- Drift
AI agent to run robot simulations faster and more reliably
Hugging Face (31)
- HopChain: Multi-Hop Data Synthesis for Generalizable Vision-Language Reasoning
VLMs show strong multimodal capabilities, but they still struggle with fine-grained vision-language reasoning. We find that long CoT reasoning exposes diverse failure modes, including perception, reasoning, knowledge, and hallucination errors, which can compound across intermediate steps. However, most existing vision-language data used for RLVR does not involve complex reasoning chains that rely on visual evidence throughout, leaving these weaknesses largely unexposed. We therefore propose HopChain, a scalable framework for synthesizing multi-hop vision-language reasoning data specifically for RLVR training of VLMs. Each synthesized multi-hop query forms a logically dependent chain of instance-grounded hops, where earlier hops establish the instances, sets, or conditions needed for later hops, while the final answer remains a specific, unambiguous number suitable for verifiable rewards. We add the multi-hop data synthesized by HopChain to the original RLVR data used to train Qwen3.5-35B-A3B and Qwen3.5-397B-A17B, and compare against RLVR on the original RLVR data alone across 24 benchmarks spanning STEM and Puzzle, General VQA, Text Recognition and Document Understanding, and Video Understanding. Although this multi-hop data is not synthesized to target any specific benchmark, adding it improves 20 out of 24 benchmarks on both models, indicating broad and generalizable gains. To demonstrate that full chained queries are important, we replace them with half-multi-hop or single-hop variants, reducing the 24-benchmark average accuracy by 5.3 and 7.0 points, respectively. Multi-hop training also strengthens long-CoT vision-language reasoning, with gains peaking at more than 50 accuracy points in the ultra-long-CoT regime. These experiments establish HopChain as an effective, scalable framework for synthesizing multi-hop data that improves generalizable vision-language reasoning.
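The chained-hop structure HopChain describes, where earlier hops establish the sets that later hops filter and the final answer is a single verifiable number, can be illustrated with a toy example. The scene, hop functions, and Python representation below are invented for illustration; the paper synthesizes such chains from visual instances, not dicts.

```python
# Toy sketch of a HopChain-style multi-hop query: each hop consumes the set
# established by the previous one, and the final answer reduces to a single
# unambiguous number suitable for a verifiable reward.

objects = [
    {"kind": "car",   "color": "red",  "wheels": 4},
    {"kind": "car",   "color": "blue", "wheels": 4},
    {"kind": "truck", "color": "red",  "wheels": 6},
    {"kind": "bike",  "color": "red",  "wheels": 2},
]

# Hop 1: establish the instance set ("find all red vehicles").
hop1 = [o for o in objects if o["color"] == "red"]

# Hop 2: filter it further ("of those, keep the ones with more than 2 wheels");
# this hop is only well-defined given hop 1's output, which is what makes the
# chain logically dependent rather than a bag of independent questions.
hop2 = [o for o in hop1 if o["wheels"] > 2]

# Final hop: reduce to a number ("how many wheels do they have in total?").
answer = sum(o["wheels"] for o in hop2)
print(answer)  # → 10
```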
- Astrolabe: Steering Forward-Process Reinforcement Learning for Distilled Autoregressive Video Models
Distilled autoregressive (AR) video models enable efficient streaming generation but frequently misalign with human visual preferences. Existing reinforcement learning (RL) frameworks are not naturally suited to these architectures, typically requiring either expensive re-distillation or solver-coupled reverse-process optimization that introduces considerable memory and computational overhead. We present Astrolabe, an efficient online RL framework tailored for distilled AR models. To overcome existing bottlenecks, we introduce a forward-process RL formulation based on negative-aware fine-tuning. By contrasting positive and negative samples directly at inference endpoints, this approach establishes an implicit policy improvement direction without requiring reverse-process unrolling. To scale this alignment to long videos, we propose a streaming training scheme that generates sequences progressively via a rolling KV-cache, applying RL updates exclusively to local clip windows while conditioning on prior context to ensure long-range coherence. Finally, to mitigate reward hacking, we integrate a multi-reward objective stabilized by uncertainty-aware selective regularization and dynamic reference updates. Extensive experiments demonstrate that our method consistently enhances generation quality across multiple distilled AR video models, serving as a robust and scalable alignment solution.
- TerraScope: Pixel-Grounded Visual Reasoning for Earth Observation
Vision-language models (VLMs) have shown promise in earth observation (EO), yet they struggle with tasks that require grounding complex spatial reasoning in precise pixel-level visual representations. To address this problem, we introduce TerraScope, a unified VLM that delivers pixel-grounded geospatial reasoning with two key capabilities: (1) modality-flexible reasoning: it handles single-modality inputs (optical or SAR) and adaptively fuses different modalities into the reasoning process when both are available; (2) multi-temporal reasoning: it integrates temporal sequences for change analysis across multiple time points. In addition, we curate Terra-CoT, a large-scale dataset containing 1 million samples with pixel-level masks embedded in reasoning chains across multiple sources. We also propose TerraScope-Bench, the first benchmark for pixel-grounded geospatial reasoning with six sub-tasks that evaluates both answer accuracy and mask quality to ensure authentic pixel-grounded reasoning. Experiments show that TerraScope significantly outperforms existing VLMs on pixel-grounded geospatial reasoning while providing interpretable visual evidence.
- ProactiveBench: Benchmarking Proactiveness in Multimodal Large Language Models
Effective collaboration begins with knowing when to ask for help. For example, when trying to identify an occluded object, a human would ask someone to remove the obstruction. Can MLLMs exhibit a similar "proactive" behavior by requesting simple user interventions? To investigate this, we introduce ProactiveBench, a benchmark built from seven repurposed datasets that tests proactiveness across different tasks such as recognizing occluded objects, enhancing image quality, and interpreting coarse sketches. We evaluate 22 MLLMs on ProactiveBench, showing that (i) they generally lack proactiveness; (ii) proactiveness does not correlate with model capacity; (iii) "hinting" at proactiveness yields only marginal gains. Surprisingly, we found that conversation histories and in-context learning introduce negative biases, hindering performance. Finally, we explore a simple fine-tuning strategy based on reinforcement learning: its results suggest that proactiveness can be learned, even generalizing to unseen scenarios. We publicly release ProactiveBench as a first step toward building proactive multimodal models.
- FlowScene: Style-Consistent Indoor Scene Generation with Multimodal Graph Rectified Flow
Scene generation has extensive industrial applications, demanding both high realism and precise control over geometry and appearance. Language-driven retrieval methods compose plausible scenes from a large object database, but overlook object-level control and often fail to enforce scene-level style coherence. Graph-based formulations offer higher controllability over objects and inform holistic consistency by explicitly modeling relations, yet existing methods struggle to produce high-fidelity textured results, thereby limiting their practical utility. We present FlowScene, a tri-branch scene generative model conditioned on multimodal graphs that collaboratively generates scene layouts, object shapes, and object textures. At its core lies a tight-coupled rectified flow model that exchanges object information during generation, enabling collaborative reasoning across the graph. This enables fine-grained control of objects' shapes, textures, and relations while enforcing scene-level style coherence across structure and appearance. Extensive experiments show that FlowScene outperforms both language-conditioned and graph-conditioned baselines in terms of generation realism, style consistency, and alignment with human preferences.
- The Y-Combinator for LLMs: Solving Long-Context Rot with λ-Calculus
LLMs are increasingly used as general-purpose reasoners, but long inputs remain bottlenecked by a fixed context window. Recursive Language Models (RLMs) address this by externalising the prompt and recursively solving subproblems. Yet existing RLMs depend on an open-ended read-eval-print loop (REPL) in which the model generates arbitrary control code, making execution difficult to verify, predict, and analyse. We introduce λ-RLM, a framework for long-context reasoning that replaces free-form recursive code generation with a typed functional runtime grounded in λ-calculus. It executes a compact library of pre-verified combinators and uses neural inference only on bounded leaf subproblems, turning recursive reasoning into a structured functional program with explicit control flow. We show that λ-RLM admits formal guarantees absent from standard RLMs, including termination, closed-form cost bounds, controlled accuracy scaling with recursion depth, and an optimal partition rule under a simple cost model. Empirically, across four long-context reasoning tasks and nine base models, λ-RLM outperforms standard RLM in 29 of 36 model-task comparisons, improves average accuracy by up to +21.9 points across model tiers, and reduces latency by up to 4.1x. These results show that typed symbolic control yields a more reliable and efficient foundation for long-context reasoning than open-ended recursive code generation. The complete implementation of λ-RLM is open-sourced for the community at: https://github.com/lambda-calculus-LLM/lambda-RLM.
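The core move, replacing free-form recursive control code with a small pre-verified combinator that only calls the model on bounded leaves, can be sketched in a few lines. This is a toy illustration of the idea, not the paper's runtime: `solve_leaf` stands in for a bounded LLM call, and the word-counting task is invented.

```python
# Recursion is driven by one pre-verified combinator (divide-and-conquer),
# never by model-generated control flow. Neural inference happens only at
# leaves whose input fits a fixed budget.

from typing import Callable, List

LEAF_LIMIT = 1000  # max characters a "leaf" call may see

def solve_leaf(chunk: str) -> int:
    # Stand-in for a neural call on a bounded subproblem:
    # here, count occurrences of the token "error".
    return chunk.lower().split().count("error")

def split_map_reduce(text: str,
                     leaf: Callable[[str], int],
                     combine: Callable[[List[int]], int]) -> int:
    """Pre-verified combinator: terminates because each recursive call
    receives a strictly smaller input, giving a closed-form cost bound
    of O(n / LEAF_LIMIT) leaf calls."""
    if len(text) <= LEAF_LIMIT:
        return leaf(text)
    mid = len(text) // 2
    left = split_map_reduce(text[:mid], leaf, combine)
    right = split_map_reduce(text[mid:], leaf, combine)
    return combine([left, right])

doc = ("all good " * 300) + "error " + ("all good " * 300) + "error "
print(split_map_reduce(doc, solve_leaf, sum))  # → 2
```

Because the control flow lives in the combinator rather than in generated code, termination and cost bounds can be proven once, for the combinator, rather than re-audited per query.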
- Omni-WorldBench: Towards a Comprehensive Interaction-Centric Evaluation for World Models
Video-based world models have emerged along two dominant paradigms: video generation and 3D reconstruction. However, existing evaluation benchmarks either focus narrowly on visual fidelity and text-video alignment for generative models, or rely on static 3D reconstruction metrics that fundamentally neglect temporal dynamics. We argue that the future of world modeling lies in 4D generation, which jointly models spatial structure and temporal evolution. In this paradigm, the core capability is interactive response: the ability to faithfully reflect how interaction actions drive state transitions across space and time. Yet no existing benchmark systematically evaluates this critical dimension. To address this gap, we propose Omni-WorldBench, a comprehensive benchmark specifically designed to evaluate the interactive response capabilities of world models in 4D settings. Omni-WorldBench comprises two key components: Omni-WorldSuite, a systematic prompt suite spanning diverse interaction levels and scene types; and Omni-Metrics, an agent-based evaluation framework that quantifies world modeling capabilities by measuring the causal impact of interaction actions on both final outcomes and intermediate state evolution trajectories. We conduct extensive evaluations of 18 representative world models across multiple paradigms. Our analysis reveals critical limitations of current world models in interactive response, providing actionable insights for future research. Omni-WorldBench will be publicly released to foster progress in interactive 4D world modeling.
- Speed by Simplicity: A Single-Stream Architecture for Fast Audio-Video Generative Foundation Model
We present daVinci-MagiHuman, an open-source audio-video generative foundation model for human-centric generation. daVinci-MagiHuman jointly generates synchronized video and audio using a single-stream Transformer that processes text, video, and audio within a unified token sequence via self-attention only. This single-stream design avoids the complexity of multi-stream or cross-attention architectures while remaining easy to optimize with standard training and inference infrastructure. The model is particularly strong in human-centric scenarios, producing expressive facial performance, natural speech-expression coordination, realistic body motion, and precise audio-video synchronization. It supports multilingual spoken generation across Chinese (Mandarin and Cantonese), English, Japanese, Korean, German, and French. For efficient inference, we combine the single-stream backbone with model distillation, latent-space super-resolution, and a Turbo VAE decoder, enabling generation of a 5-second 256p video in 2 seconds on a single H100 GPU. In automatic evaluation, daVinci-MagiHuman achieves the highest visual quality and text alignment among leading open models, along with the lowest word error rate (14.60%) for speech intelligibility. In pairwise human evaluation, it achieves win rates of 80.0% against Ovi 1.1 and 60.9% against LTX 2.3 over 2000 comparisons. We open-source the complete model stack, including the base model, the distilled model, the super-resolution model, and the inference codebase.
- LongCat-Flash-Prover: Advancing Native Formal Reasoning via Agentic Tool-Integrated Reinforcement Learning
We introduce LongCat-Flash-Prover, a flagship 560-billion-parameter open-source Mixture-of-Experts (MoE) model that advances Native Formal Reasoning in Lean4 through agentic tool-integrated reasoning (TIR). We decompose the native formal reasoning task into three independent formal capabilities, i.e., auto-formalization, sketching, and proving. To facilitate these capabilities, we propose a Hybrid-Experts Iteration Framework to expand high-quality task trajectories, including generating a formal statement based on a given informal problem, producing a whole-proof directly from the statement, or a lemma-style sketch. During agentic RL, we present a Hierarchical Importance Sampling Policy Optimization (HisPO) algorithm, which aims to stabilize the MoE model training on such long-horizon tasks. It employs a gradient masking strategy that accounts for the policy staleness and the inherent train-inference engine discrepancies at both sequence and token levels. Additionally, we also incorporate theorem consistency and legality detection mechanisms to eliminate reward hacking issues. Extensive evaluations show that our LongCat-Flash-Prover sets a new state-of-the-art for open-weights models in both auto-formalization and theorem proving. Demonstrating remarkable sample efficiency, it achieves a 97.1% pass rate on MiniF2F-Test using an inference budget of only 72 attempts per problem. On more challenging benchmarks, it solves 70.8% of ProverBench and 41.5% of PutnamBench with no more than 220 attempts per problem, significantly outperforming existing open-weights baselines.
- Look Where It Matters: High-Resolution Crops Retrieval for Efficient VLMs
Vision-language models (VLMs) typically process images at native high resolution, forcing a trade-off between accuracy and computational efficiency: high-resolution inputs capture fine details but incur significant computational costs, while low-resolution inputs are efficient but may miss critical visual information, like small text. We present AwaRes, a spatial-on-demand framework that resolves this accuracy-efficiency trade-off by operating on a low-resolution global view and using tool-calling to retrieve only the high-resolution segments needed for a given query. We construct supervised data automatically: a judge compares low- vs. high-resolution answers to label whether cropping is needed, and an oracle grounding model localizes the evidence for the correct answer, which we map to a discrete crop set to form multi-turn tool-use trajectories. We train our framework with cold-start SFT followed by multi-turn GRPO with a composite reward that combines semantic answer correctness with explicit crop-cost penalties. Project page: https://nimrodshabtay.github.io/AwaRes
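The spatial-on-demand idea, encode a cheap global view and fetch a full-resolution crop from a fixed grid only when the query needs fine detail, can be sketched with arrays. The decision rule, grid size, and downsampling factor below are assumptions for illustration; the paper learns the cropping policy with SFT and multi-turn GRPO.

```python
# Sketch of on-demand high-resolution crop retrieval: a low-resolution global
# view for cheap encoding, plus a tool that returns one full-detail cell from
# a discrete crop grid.

import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(1024, 1024))   # stand-in high-res image

def low_res_view(img, factor=8):
    """Cheap global view: block-average downsampling by `factor`."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def crop(img, row, col, grid=4):
    """Tool call: retrieve one high-res cell from a fixed grid of crops."""
    h, w = img.shape
    ch, cw = h // grid, w // grid
    return img[row * ch:(row + 1) * ch, col * cw:(col + 1) * cw]

global_view = low_res_view(image)        # 128x128: 64x fewer pixels to encode
patch = crop(image, 2, 3)                # 256x256: full detail, on demand
print(global_view.shape, patch.shape)    # (128, 128) (256, 256)
```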
- OpenResearcher: A Fully Open Pipeline for Long-Horizon Deep Research Trajectory Synthesis
Training deep research agents requires long-horizon trajectories that interleave search, evidence aggregation, and multi-step reasoning. However, existing data collection pipelines typically rely on proprietary web APIs, making large-scale trajectory synthesis costly, unstable, and difficult to reproduce. We present OpenResearcher, a reproducible pipeline that decouples one-time corpus bootstrapping from multi-turn trajectory synthesis and executes the search-and-browse loop entirely offline using three explicit browser primitives: search, open, and find, over a 15M-document corpus. Using GPT-OSS-120B as the teacher model, we synthesize over 97K trajectories, including a substantial long-horizon tail with 100+ tool calls. Supervised fine-tuning of a 30B-A3B backbone on these trajectories achieves 54.8% accuracy on BrowseComp-Plus, a +34.0 point improvement over the base model, while remaining competitive on BrowseComp, GAIA, and xbench-DeepSearch. Because the environment is offline and fully instrumented, it also enables controlled analysis, where our study reveals practical insights into deep research pipeline design, including data filtering strategies, agent configuration choices, and how retrieval success relates to final answer accuracy. We release the pipeline, synthesized trajectories, model checkpoints, and the offline search environment at https://github.com/TIGER-AI-Lab/OpenResearcher.
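The three offline browser primitives the abstract names can be mimicked over an in-memory corpus. The dict-backed store, the term-overlap scorer, and the return shapes below are illustrative assumptions, not the released pipeline's API; the point is only that search/open/find needs no live web access.

```python
# Minimal offline search-and-browse loop: search, open, find, backed by a
# tiny in-memory "corpus" standing in for the paper's 15M-document store.

corpus = {
    "doc1": "The Strait of Hormuz carries a fifth of the world's oil.",
    "doc2": "RSS readers are regaining popularity among web users.",
    "doc3": "Oil prices spiked after the strait was blockaded.",
}

def search(query: str, k: int = 2) -> list[str]:
    """Rank doc ids by naive term overlap with the query."""
    terms = set(query.lower().split())
    scored = [(sum(t in text.lower() for t in terms), doc_id)
              for doc_id, text in corpus.items()]
    scored.sort(reverse=True)  # ties break on doc id in this toy scorer
    return [doc_id for score, doc_id in scored[:k] if score > 0]

def open_doc(doc_id: str) -> str:
    """Return the full text of a document."""
    return corpus[doc_id]

def find(doc_id: str, needle: str) -> int:
    """Locate a substring inside a document (-1 if absent)."""
    return corpus[doc_id].lower().find(needle.lower())

hits = search("oil strait")
print(hits)                       # ['doc3', 'doc1'] under this toy scorer
print(find(hits[0], "oil") >= 0)  # True
```

Because every primitive is a pure function over a frozen corpus, trajectories are deterministic and replayable, which is what makes the controlled analyses in the abstract possible.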
- VideoDetective: Clue Hunting via both Extrinsic Query and Intrinsic Relevance for Long Video Understanding
Long video understanding remains challenging for multimodal large language models (MLLMs) due to limited context windows, which necessitate identifying sparse query-relevant video segments. However, existing methods predominantly localize clues based solely on the query, overlooking the video's intrinsic structure and varying relevance across segments. To address this, we propose VideoDetective, a framework that integrates query-to-segment relevance and inter-segment affinity for effective clue hunting in long-video question answering. Specifically, we divide a video into various segments and represent them as a visual-temporal affinity graph built from visual similarity and temporal proximity. We then perform a Hypothesis-Verification-Refinement loop to estimate relevance scores of observed segments to the query and propagate them to unseen segments, yielding a global relevance distribution that guides the localization of the most critical segments for final answering with sparse observation. Experiments show our method consistently achieves substantial gains across a wide range of mainstream MLLMs on representative benchmarks, with accuracy improvements of up to 7.5% on VideoMME-long. Our code is available at https://videodetective.github.io/
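The relevance-propagation step, spreading scores estimated on a few observed segments to unseen neighbours over a visual-temporal affinity graph, can be sketched numerically. The graph weights and the single propagation step below are illustrative; the paper's Hypothesis-Verification-Refinement loop is iterative and model-driven.

```python
# Toy affinity-graph relevance propagation over 5 consecutive video segments:
# temporal neighbours get weight 1.0, next-nearest 0.5, and unseen segments
# inherit an affinity-weighted average of observed relevance scores.

import numpy as np

n = 5
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if abs(i - j) == 1:
            A[i, j] = 1.0
        elif abs(i - j) == 2:
            A[i, j] = 0.5

observed = {1: 0.9, 4: 0.1}        # relevance estimated by querying the MLLM
scores = np.zeros(n)
for idx, val in observed.items():
    scores[idx] = val

# One propagation step: row-normalise the affinities, then average.
W = A / A.sum(axis=1, keepdims=True)
propagated = W @ scores
for idx, val in observed.items():  # keep observed scores fixed
    propagated[idx] = val

# Pick the most promising unseen segment to inspect next.
best_unseen = int(np.argmax([propagated[i] if i not in observed else -1
                             for i in range(n)]))
print(best_unseen)  # → 0 (adjacent to the high-relevance observation)
```

The design choice being illustrated: instead of ranking every segment against the query alone, unobserved segments borrow evidence from observed neighbours, so sparse observation can still yield a global relevance distribution.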
Techmeme (42)
- Crunchyroll is investigating a breach after hackers claimed they accessed a support agent's account and stole the personal information of ~6.8M users (Lawrence Abrams/BleepingComputer)
Popular anime streaming platform Crunchyroll is investigating a breach after hackers claimed to have stolen personal information for approximately 6.8 million people.
- Drone delivery startup Zipline raised an additional $200M, including from Paradigm, bringing Zipline's Series H, originally announced in January, to $800M (Kirsten Korosec/TechCrunch)
U.S. autonomous drone delivery and logistics startup Zipline has raised another $200 million, adding to a recent funding round originally announced in January.
- Kalshi announces new guardrails to preemptively block politicians, athletes, and others from trading in their relevant markets (Nathan Bomey/Axios)
Prediction market Kalshi plans to block athletes, coaches and officials from betting on their sports and to block political candidates from trading on their campaigns, Axios has learned.
- Meta hires the team behind Dreamer, which lets users create AI agents, including Hugo Barra, former Stripe CTO David Singleton, and designer Nicholas Jitkoff (Kurt Wagner/Bloomberg)
Meta Platforms Inc. has hired the founders and team behind the artificial intelligence startup Dreamer, which launched earlier …
- The US plans to create a voluntary consortium of countries to invest $4T to secure supply chains for chips, energy, and minerals; the US will contribute $250M (New York Times)
Trump officials said on Monday that the war in Iran had emphasized the need to reduce vulnerabilities for energy and technology.
- Doc: Kalshi and Polymarket CEOs are investing in a VC fund, led by two early Kalshi employees, that is raising up to $35M to back prediction market startups (Ben Weiss/Fortune)
The CEOs of Kalshi and Polymarket are locked in a brutal fight to dominate the white-hot prediction market sector.
- At a hearing, a US federal judge says the Pentagon's treatment of Anthropic is "troubling" and that "it looks like an attempt to cripple Anthropic" (Maria Curi/Axios)
A federal judge on Tuesday called the Pentagon's treatment of Anthropic “troubling” as the AI company urged the court …
- Disney ends its partnership with OpenAI, signed in December 2025, in which it pledged to invest $1B and agreed to license some characters to Sora (Todd Spangler/Variety)
OpenAI said it will discontinue Sora, the generative-AI video creation platform it launched last year, without providing a reason for the decision.
- A New Mexico jury finds that Meta violated state laws by failing to safeguard its platforms from child predators and orders it to pay $375M in damages (Jonathan Vanian/CNBC)
A jury has reached a verdict in a major New Mexico trial in which the state's attorney general alleged that Meta failed to safeguard its family of apps from child predators.
- Arm says its AGI CPU offers up to 136 Neoverse V3 cores, 6GB/s memory bandwidth per core, and more than 2x performance per rack compared with x86 systems (VideoCardz.com)
Arm has announced the AGI CPU, its first production silicon product and its first Arm-designed data center CPU.
- Sam Altman told staff he has ceded oversight of OpenAI's safety and security teams to focus on fundraising, supply chains, and building data centers at scale (The Information)
OpenAI CEO Sam Altman has relinquished direct oversight of the company's safety and security teams so he can focus on raising capital …
- OpenAI plans to discontinue products that use its Sora models, including its consumer app, a Sora version for developers, and a video feature inside ChatGPT (Berber Jin/Wall Street Journal)
The app, released last year, allowed people to insert themselves into famous movie scenes, among other functions.
Solidot (37)
- Oil crisis accelerates the shift to renewable energy
The blockade of the Strait of Hormuz has once again plunged the world into a severe oil crisis. Roughly a fifth of the world's oil and LNG shipments pass through the strait, and Asia is the region hit hardest this time. Unlike in earlier oil crises, renewables can now compete with fossil fuels in many countries. The two most populous nations, China and India, have both expanded renewables; China still relies on coal power, but its renewable capacity far exceeds India's. IEA data show that about one in ten cars in China is electric. China remains the world's largest crude importer and the biggest buyer of Iranian oil, but by using renewables to electrify parts of its economy it has reduced its dependence on imported oil; without that shift, the impact on China would be far greater. India is now facing a shortage of cooking gas, prompting residents to snap up induction cooktops. Solar and wind account for only 11% of Japan's energy output, on par with India and below China's 18%. Pakistan's rapid adoption of solar has cut its fossil-fuel imports by more than $12 billion since 2020. Bangladesh, with limited energy reserves, has closed universities to save electricity, and its government has begun rationing fuel.
- Cigarette butts linger in the environment for more than a decade
According to a study published in Environmental Pollution, cigarette butts never fully disappear from the environment. Because they decompose slowly and release toxic substances, they pose a long-term environmental hazard. The researchers tracked the decomposition of cigarette butts over ten years and found that under nitrogen-rich conditions a butt loses 84% of its mass in a decade. Decomposition proceeds in four phases, with an initial peak followed by a second mid-term peak, indicating that old butts present an ongoing ecological risk.
- Samsung's Galaxy S26 supports AirDrop
Following Google's Pixel 10 series, Samsung has announced official AirDrop support for its Galaxy S26 series. Eric Kay, Google's VP of Android platform engineering, previously said more Android devices would gain AirDrop support this year. Apple's AirDrop on iPhone and Android's Quick Share are used to quickly share files between devices. The Galaxy S26 series comprises three models, the Galaxy S26, Galaxy S26 Plus, and Galaxy S26 Ultra; AirDrop support will launch first in South Korea and then expand to the US and other regions. Other Samsung smartphone models may gain AirDrop support in the future.
- Microsoft ships emergency update to fix Microsoft account sign-in failures
Microsoft has released emergency update KB5085516 to fix a Microsoft account sign-in problem introduced by the March security update KB5079473. The issue affects apps that sign in with a Microsoft account, including Teams, OneDrive, Microsoft Edge, Microsoft 365 Copilot, and Office apps such as Excel and Word: signing in to these apps with a Microsoft account returned an error claiming the user was not connected to the internet. Microsoft says enterprise customers signing in with Microsoft Entra ID were unaffected.
- Loongson engineers take over maintenance of its DRM driver
The Loongson DRM (Direct Rendering Manager) driver in the Linux kernel, which handles the display controllers in LS7A/LS2K SoCs, had been marked orphaned for lack of a maintainer. Loongson engineers have now announced on the mailing list that they will take over maintenance. Jianmin Lv and Qianhai are both GPU R&D engineers at Loongson working on kernel drivers; they have the ability and the responsibility to keep maintaining Loongson's GPU driver and minimize the impact on users. After internal discussion, the team recommended that the two take over maintenance, with Huacai, Mingcong, and Ruoyao assisting. Loongson will ship updates according to its chip-support roadmap.
- An article recommending RSS readers loaded 500 MB of ads
In an era of algorithmic control and ad bombardment, RSS readers, which let us decide what we read, are enjoying renewed popularity. PC Gamer published an article recommending RSS readers, yet, ironically, the page itself was saturated with ads. Beyond multiple pop-ups, the page weighed 37 MB after its initial load and then kept loading ads in the background, reaching 500 MB of ads within five minutes. That is exactly why we need RSS readers to escape all this.
- [Featured] Sign-ups are now open for the 2026 NVIDIA startup showcase!
Starting in March, NVIDIA will run a series of showcase events for tech startups across China, combining roadshows, booth exhibits, and enterprise matchmaking. Beijing (April 23): an in-depth look at the GTC 2026 announcements, focusing on physical AI, AI agents, and LLM applications, with roadshows, exhibits, and enterprise and technology matchmaking. Chengdu (May 15): a session on AI applications and going global, with NVIDIA experts and industry guests covering AI abroad, physical AI, AI agents, and deployed AI applications. Shanghai (May 21): focused on AI agents, physical AI, and LLM applications and their use cases, with roadshows, exhibits, and matchmaking. Macau (May 26-30): an overseas session co-located with the BEYOND International Technology Innovation Expo, covering AI agents, physical AI, and going global, with #GTC26 technical recaps, roadshows, panel discussions, and investment and business matchmaking; registered companies may receive a free BEYOND Expo booth. Company registration: https://jinshuju.com/f/MfGfIi?x_field_1=zhiding-wechat Attendee registration: https://jinshuju.com/f/ZMKknG?x_field_1=Zhiding-WeChat
- Firefox 149 released
On March 24, Mozilla released Firefox 149. Major changes include: a new JPEG-XL image decoder, jxl-rs, written in Rust, replacing the old C++ decoder; faster PDF handling, with the ability to download images from a PDF via the right-click context menu; improved HTTP/3 upload performance; and a built-in VPN (currently offered only in the US and a few other regions) with 50 GB of free traffic per month.
- Will AI make source code evolve, or drive it extinct?
In an era when LLM-based AI-assisted programming is increasingly popular, will programming languages emerge that are optimized for AI rather than human readability? Experiments are already under way to minimize tokens for LLM efficiency. Will AI make source code evolve, or drive it extinct? Could we have AI generate an intermediate language directly from prompts and feed it into an interpreter or compiler? Will high-level languages still be needed? Last October, IEEE Spectrum held a webinar on whether AI will cause programming languages to disappear. High-level languages are languages for humans; we could well have AI emit an intermediate language directly, while future programmers still make design decisions about interfaces, algorithms, and architecture. The generated code would still need to pass tests and be able to explain what it is doing.
- Amazon AWS's Bahrain data center disrupted by drone activity for the second time
Amazon AWS's data center in Bahrain has had service disrupted by drone activity, the second time this month AWS has been affected by the war. An Amazon spokesperson confirmed the problem was caused by drone activity but gave no further details; it is unclear whether the Bahrain facility was struck directly or whether a nearby area was hit. Amazon says it is helping customers migrate to other AWS regions. Earlier this month, Amazon said its facilities in Bahrain and the UAE had been struck by drones, and that structural damage meant recovery would take some time.
- America's status as a scientific superpower is slipping
The Trump administration's anti-science stance is driving large numbers of researchers out of the US. Data show that among researchers moving across borders from January to August 2025, the US share of outflows rose to 11% while its share of inflows fell to 15%. Scholars in fields such as climate change are heading to Europe, and the numbers moving to European countries such as Spain and France, as well as to Canada and South Korea, are rising. By attracting the world's best talent to its universities and companies, the US built its status as a scientific superpower and a wellspring of innovation and economic growth. But the second Trump administration's pressure on elite universities, cuts to science and technology budgets, visa and immigration restrictions, and skepticism toward climate change and vaccines are driving researchers away from America.
- The 2026 Abel Prize goes to Gerd Faltings, who proved the Mordell conjecture
The 2026 Abel Prize has been awarded to German mathematician Gerd Faltings for his proof of the Mordell conjecture. He won the Fields Medal in 1986, at age 32, for that proof, and the conjecture has since been known as Faltings's theorem. The theorem concerns curves, which can usually be described by simple equations combining two variables through addition and multiplication. Plotting an equation's solutions in a coordinate system yields a line, an ellipse, or a more complex curve. Mathematicians have long sought a special subset of these solutions, the "rational points": points whose coordinates are integers or fractions. These special points encode rich and intricate relationships, hiding an order mathematicians have tried to uncover. Yet there are infinitely many curves, and pinning down the rational points on all of them seemed impossible, until Faltings's theorem. He proved that if some variable in a curve's equation is raised to a power higher than 3, the curve can have only finitely many rational points; only lines, quadratic curves such as circles, and cubic equations can have infinitely many rational points. The proof is regarded as a cornerstone of arithmetic geometry.