Weekly Digest — 2026-W19
180 unique stories (2026-05-04 → 2026-05-10), aggregated across 8 sources.
Hacker News (30)
- Microsoft Edge stores all passwords in memory in clear text, even when unused (twitter.com)
- Heat pump sales rise across Europe (www.pv-magazine.com)
- US healthcare marketplaces shared citizenship and race data with ad tech giants (techcrunch.com)
- Days without GitHub incidents (www.dayswithoutgithubincident.com)
- Stop big tech from making users behave in ways they don't want to (economist.com)
- I am worried about Bun (wwj.dev)
- .de TLD offline due to DNSSEC? (dnssec-analyzer.verisignlabs.com)
- California farmers to destroy 420k peach trees following Del Monte bankruptcy (www.sfgate.com)
- IBM didn't want Microsoft to use the Tab key to move between dialog fields (devblogs.microsoft.com)
- Computer Use is 45x more expensive than structured APIs (reflex.dev)
- Accelerating Gemma 4: faster inference with multi-token prediction drafters (blog.google)
- Three Inverse Laws of AI (susam.net)
GitHub Trending (19)
Product Hunt (36)
- Sleek Analytics for iOS
Your website analytics in your pocket
- Flowly
Your personal AI assistant, native to your desktop
- Claude Code & Codex Usage Trading Cards by Rudel
Get your trading card based on your CC & codex usage
- Dropy
Track prices on stores like Amazon, eBay, & AliExpress
- Replyke V7
Pre-Modeled Infra & Client SDKs for User-Powered Products.
- Mindra
Agent Teams You Can Actually Delegate To
- Flowstep 1.0
AI design engineer to turn your thoughts into editable UI
- Kilo Code v7 for VS Code
Parallel agents, diff reviewer, and multi-model comparisons
- Blaze
The AI-powered calendar that plans your day for you.
- Velo 2.0
Instantly turn your voice and screen into shareable videos
- Breathwrk
Learn and master breathwork with guided breathing exercises
- Tollecode
A local AI coding assistant to delegate tasks to AI agents
Hugging Face (30)
- UniVidX: A Unified Multimodal Framework for Versatile Video Generation via Diffusion Priors
Recent progress has shown that video diffusion models (VDMs) can be repurposed for diverse multimodal graphics tasks. However, existing methods often train separate models for each problem setting, which fixes the input-output mapping and limits the modeling of correlations across modalities. We present UniVidX, a unified multimodal framework that leverages VDM priors for versatile video generation. UniVidX formulates pixel-aligned tasks as conditional generation in a shared multimodal space, adapts to modality-specific distributions while preserving the backbone's native priors, and promotes cross-modal consistency during synthesis. It is built on three key designs. Stochastic Condition Masking (SCM) randomly partitions modalities into clean conditions and noisy targets during training, enabling omni-directional conditional generation instead of fixed mappings. Decoupled Gated LoRA (DGL) introduces per-modality LoRAs that are activated when a modality serves as the generation target, preserving the strong priors of the VDM. Cross-Modal Self-Attention (CMSA) shares keys and values across modalities while keeping modality-specific queries, facilitating information exchange and inter-modal alignment. We instantiate UniVidX in two domains: UniVid-Intrinsic, for RGB videos and intrinsic maps including albedo, irradiance, and normal; and UniVid-Alpha, for blended RGB videos and their constituent RGBA layers. Experiments show that both models achieve performance competitive with state-of-the-art methods across distinct tasks and generalize robustly to in-the-wild scenarios, even when trained on fewer than 1,000 videos. Project page: https://houyuanchen111.github.io/UniVidX.github.io/
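The Stochastic Condition Masking design described above can be sketched in a few lines of Python. This is a toy illustration of the partitioning idea only; the function and modality names are ours, not the paper's code:

```python
import random

def stochastic_condition_mask(modalities, rng=random):
    """Randomly partition modality names into clean conditions and noisy
    generation targets, ensuring at least one target. Illustrative sketch
    of the SCM idea; not the paper's implementation."""
    targets = [m for m in modalities if rng.random() < 0.5]
    if not targets:                       # always generate something
        targets = [rng.choice(modalities)]
    conditions = [m for m in modalities if m not in targets]
    return conditions, targets

# Example: RGB video plus intrinsic maps, as in UniVid-Intrinsic
conds, tgts = stochastic_condition_mask(["rgb", "albedo", "irradiance", "normal"])
assert set(conds) | set(tgts) == {"rgb", "albedo", "irradiance", "normal"}
assert len(tgts) >= 1 and not set(conds) & set(tgts)
```

Because every subset can land on either side of the split, one model is trained on all conditional directions rather than a fixed input-output mapping.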
- Web2BigTable: A Bi-Level Multi-Agent LLM System for Internet-Scale Information Search and Extraction
Agentic web search increasingly faces two distinct demands: deep reasoning over a single target, and structured aggregation across many entities and heterogeneous sources. Current systems struggle on both fronts. Breadth-oriented tasks demand schema-aligned outputs with wide coverage and cross-entity consistency, while depth-oriented tasks require coherent reasoning over long, branching search trajectories. We introduce Web2BigTable, a multi-agent framework for web-to-table search that supports both regimes. Web2BigTable adopts a bi-level architecture in which an upper-level orchestrator decomposes the task into sub-problems and lower-level worker agents solve them in parallel. Through a closed-loop run--verify--reflect process, the framework jointly improves decomposition and execution over time via persistent, human-readable external memory, with self-evolving updates to each individual agent. During execution, workers coordinate through a shared workspace that makes partial findings visible, allowing them to reduce redundant exploration, reconcile conflicting evidence, and adapt to emerging coverage gaps. Web2BigTable sets a new state of the art on WideSearch, reaching an Avg@4 Success Rate of 38.50 (7.5x the second best at 5.10), Row F1 of 63.53 (+25.03 over the second best), and Item F1 of 80.12 (+14.42 over the second best). It also generalises to depth-oriented search on XBench-DeepSearch, achieving 73.0 accuracy. Code is available at https://github.com/web2bigtable/web2bigtable.
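The bi-level orchestrator/worker pattern with a shared workspace might look roughly like this. It is a hedged sketch with plain functions standing in for the LLM agents; the interfaces are our assumptions, not the project's API:

```python
from concurrent.futures import ThreadPoolExecutor

def orchestrate(task, decompose, solve, workers=4):
    """Bi-level sketch: an upper-level step decomposes the task into
    sub-problems; lower-level workers solve them in parallel while
    writing partial findings into a shared workspace dict, so each
    worker can see what the others have already found."""
    workspace = {}                        # shared, visible partial findings
    subproblems = decompose(task)
    def run(sp):
        result = solve(sp, workspace)     # worker may read others' findings
        workspace[sp] = result
        return result
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(run, subproblems))
    return workspace

# Toy usage: one sub-problem per entity of a table-filling task
rows = orchestrate(
    ["France", "Japan"],
    decompose=lambda t: t,
    solve=lambda entity, ws: {"entity": entity, "population": "TBD"},
)
```

In the real system the workspace would additionally drive deduplication and conflict reconciliation; here it only collects results.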
- Map2World: Segment Map Conditioned Text to 3D World Generation
3D world generation is essential for applications such as immersive content creation or autonomous driving simulation. Recent advances in 3D world generation have shown promising results; however, these methods are constrained by grid layouts and suffer from inconsistencies in object scale throughout the entire world. In this work, we introduce a novel framework, Map2World, that first enables 3D world generation conditioned on user-defined segment maps of arbitrary shapes and scales, ensuring global-scale consistency and flexibility across expansive environments. To further enhance the quality, we propose a detail enhancer network that generates fine details of the world. The detail enhancer enables the addition of fine-grained details without compromising overall scene coherence by incorporating global structure information. We design the entire pipeline to leverage strong priors from asset generators, achieving robust generalization across diverse domains, even under limited training data for scene generation. Extensive experiments demonstrate that our method significantly outperforms existing approaches in user-controllability, scale consistency, and content coherence, enabling users to generate 3D worlds under more complex conditions.
- Prox-E: Fine-Grained 3D Shape Editing via Primitive-Based Abstractions
Text-based 2D image editing models have recently reached an impressive level of maturity, motivating a growing body of work that heavily depends on these models to drive 3D edits. While effective for appearance-based modifications, such 2D-centric 3D editing pipelines often struggle with fine-grained 3D editing, where localized structural changes must be applied while strictly preserving an object's overall identity. To address this limitation, we propose Prox-E, a training-free framework that enables fine-grained 3D control through an explicit, primitive-based geometric abstraction. Our framework first abstracts an input 3D shape into a compact set of geometric primitives. A pretrained vision-language model (VLM) then edits this abstraction to specify primitive-level changes. These structural edits are subsequently used to guide a 3D generative model, enabling fine-grained, localized modifications while preserving unchanged regions of the original shape. Through extensive experiments, we demonstrate that our method consistently balances identity preservation, shape quality, and instruction fidelity more effectively than various existing approaches, including 2D-based 3D editors and training-based methods.
- From Skill Text to Skill Structure: The Scheduling-Structural-Logical Representation for Agent Skills
LLM agents increasingly rely on reusable skills, capability packages that combine instructions, control flow, constraints, and tool calls. In most current agent systems, however, skills are still represented by text-heavy artifacts, including SKILL.md-style documents and structured records whose machine-usable evidence remains embedded largely in natural-language descriptions. This poses a challenge for skill-centered agent systems: managing skill collections and using skills to support an agent both require reasoning over invocation interfaces, execution structure, and concrete side effects that are often entangled in a single textual surface. An explicit representation of skill knowledge may therefore help make these artifacts easier for machines to acquire and leverage. Drawing on Memory Organization Packets, Script Theory, and Conceptual Dependency from Schank and Abelson's classical work on linguistic knowledge representation, we introduce what is, to our knowledge, the first structured representation for agent skill artifacts that disentangles skill-level scheduling signals, scene-level execution structure, and logic-level action and resource-use evidence: the Scheduling-Structural-Logical (SSL) representation. We instantiate SSL with an LLM-based normalizer and evaluate it on a corpus of skills in two tasks, Skill Discovery and Risk Assessment, where it substantially outperforms the text-only baselines: in Skill Discovery, SSL improves MRR from 0.573 to 0.707; in Risk Assessment, it improves macro F1 from 0.744 to 0.787. These findings reveal that explicit, source-grounded structure makes agent skills easier to search and review. They also suggest that SSL is best understood as a practical step toward more inspectable, reusable, and operationally actionable skill representations for agent systems, rather than as a finished standard or an end-to-end mechanism for managing and using skills.
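To make the three-level split concrete, here is a hypothetical skill record in the spirit of SSL. The field names are our invention for illustration, not the paper's actual schema:

```python
# Illustrative three-level skill record; all keys are our guesses.
skill = {
    "scheduling": {               # skill-level: when should an agent pick this?
        "triggers": ["user asks to export a report"],
        "preconditions": ["report data already generated"],
    },
    "structural": {               # scene-level execution structure
        "scenes": [
            {"name": "render", "next": "deliver"},
            {"name": "deliver", "next": None},
        ],
    },
    "logical": {                  # logic-level actions and side effects
        "tool_calls": [{"tool": "pdf_export", "writes": ["report.pdf"]}],
        "side_effects": ["creates a file on disk"],
    },
}

def risky(skill_record):
    """Toy risk check in the spirit of the Risk Assessment task: flag
    skills whose logical level declares concrete writes."""
    return any(c.get("writes") for c in skill_record["logical"]["tool_calls"])
```

The point of the structure is that a reviewer (or retriever) can query the scheduling level for discovery and the logical level for risk, without parsing free text.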
- Stable-GFlowNet: Toward Diverse and Robust LLM Red-Teaming via Contrastive Trajectory Balance
Large Language Model (LLM) Red-Teaming, which proactively identifies vulnerabilities of LLMs, is an essential process for ensuring safety. Finding effective and diverse attacks in red-teaming is important, but achieving both is challenging. Generative Flow Networks (GFNs) that perform distribution matching are a promising method, but they are notorious for training instability and mode collapse. In particular, unstable rewards in red-teaming accelerate mode collapse. We propose Stable-GFN (S-GFN), which eliminates partition function Z estimation in GFN and reduces training instability. S-GFN avoids Z-estimation through pairwise comparisons and employs a robust masking methodology against noisy rewards. Additionally, we propose a fluency stabilizer to prevent the model from getting stuck in local optima that produce gibberish. S-GFN provides more stable training while maintaining the optimal policy of GFN. We demonstrate the strong attack performance and diversity of S-GFN across various settings.
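The core trick of removing the partition function Z via pairwise comparison can be illustrated with a small sketch (our reading of the abstract, not the paper's code). In standard trajectory balance, each trajectory's residual contains a shared log Z term, so subtracting the residuals of two trajectories cancels it:

```python
def tb_residual(log_pf, log_pb, log_reward):
    """Trajectory-balance residual with log Z omitted:
    delta(tau) = sum log P_F(tau) - log R(x) - sum log P_B(tau).
    The full TB loss would be (log Z + delta)^2."""
    return sum(log_pf) - log_reward - sum(log_pb)

def pairwise_tb_loss(traj_a, traj_b):
    """Pairwise comparison of two trajectories: the shared log Z term
    appears in both residuals, so their difference is Z-free and no
    partition-function estimate is needed. Sketch of the idea only."""
    da = tb_residual(*traj_a)
    db = tb_residual(*traj_b)
    return (da - db) ** 2

# Each trajectory: (forward log-probs, backward log-probs, log reward)
a = ([0.1, 0.2], [0.0], 0.5)
b = ([0.0], [0.0], 0.0)
loss = pairwise_tb_loss(a, b)
```

Driving this loss to zero enforces the same balance condition as TB up to the shared constant, which is why the optimal policy is preserved.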
- MolmoAct2: Action Reasoning Models for Real-world Deployment
Vision-Language-Action (VLA) models aim to provide a single generalist controller for robots, but today's systems fall short on the criteria that matter for real-world deployment. Frontier models are closed, open-weight alternatives are tied to expensive hardware, reasoning-augmented policies pay prohibitive latency for their grounding, and fine-tuned success rates remain below the threshold for dependable use. We present MolmoAct2, a fully open action reasoning model built for practical deployment, advancing its predecessor along five axes. We introduce MolmoER, a VLM backbone specialized for spatial and embodied reasoning, trained on a 3.3M-sample corpus with a specialize-then-rehearse recipe. We release three new datasets spanning low-to-medium cost platforms, including MolmoAct2-BimanualYAM, 720 hours of teleoperated bimanual trajectories that constitute the largest open bimanual dataset to date, together with quality-filtered Franka (DROID) and SO100/101 subsets. We provide OpenFAST, an open-weight, open-data action tokenizer trained on millions of trajectories across five embodiments. We redesign the architecture to graft a flow-matching continuous-action expert onto a discrete-token VLM via per-layer KV-cache conditioning. Finally, we propose MolmoThink, an adaptive-depth reasoning variant that re-predicts depth tokens only for scene regions that change between timesteps, retaining geometric grounding at a fraction of prior latency. In the most extensive empirical study of any open VLA to date, spanning 7 simulation and real-world benchmarks, MolmoAct2 outperforms strong baselines including Pi-05, while MolmoER surpasses GPT-5 and Gemini Robotics ER-1.5 across 13 embodied-reasoning benchmarks. We release model weights, training code, and complete training data. Project page: https://allenai.org/blog/molmoact2
- From Context to Skills: Can Language Models Learn from Context Skillfully?
Many real-world tasks require language models (LMs) to reason over complex contexts that exceed their parametric knowledge. This calls for context learning, where LMs directly learn relevant knowledge from the given context. An intuitive solution is inference-time skill augmentation: extracting the rules and procedures from context into natural-language skills. However, constructing such skills for context learning scenarios faces two challenges: the prohibitive cost of manual skill annotation for long, technically dense contexts, and the lack of external feedback for automated skill construction. In this paper, we propose Ctx2Skill, a self-evolving framework that autonomously discovers, refines, and selects context-specific skills without human supervision or external feedback. At its core, a multi-agent self-play loop has a Challenger that generates probing tasks and rubrics, a Reasoner that attempts to solve them guided by an evolving skill set, and a neutral Judge that provides binary feedback. Crucially, both the Challenger and the Reasoner evolve through accumulated skills: dedicated Proposer and Generator agents analyze failure cases and synthesize them into targeted skill updates for both sides, enabling automated skill discovery and refinement. To prevent adversarial collapse caused by increasingly extreme task generation and over-specialized skill accumulation, we further introduce a Cross-time Replay mechanism that identifies the skill set achieving the best balance across representative cases for the Reasoner side, ensuring robust and generalizable skill evolution. The resulting skills can be plugged into any language model to obtain better context learning capability. Evaluated on four context learning tasks from CL-bench, Ctx2Skill consistently improves solving rates across backbone models.
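The Challenger/Reasoner/Judge self-play loop described above can be caricatured in a few lines. All interfaces here are purely illustrative stand-ins for the paper's LLM agents:

```python
def ctx2skill_loop(challenger, reasoner, judge, update_skills, rounds=3):
    """Toy self-play loop: the Challenger emits a probing task, the
    Reasoner answers guided by the current skill set, the Judge gives
    binary feedback, and failures are distilled into skill updates."""
    skills = []
    for _ in range(rounds):
        task = challenger(skills)
        answer = reasoner(task, skills)
        if not judge(task, answer):        # binary feedback only
            skills = update_skills(skills, task, answer)
    return skills

# Toy usage: the reasoner only succeeds once a "shout" skill is learned
skills_out = ctx2skill_loop(
    challenger=lambda s: "hello",
    reasoner=lambda t, s: t.upper() if "shout" in s else t,
    judge=lambda t, a: a == t.upper(),
    update_skills=lambda s, t, a: s + ["shout"],
)
```

The paper's Cross-time Replay mechanism, which selects the skill set that balances performance across past cases, is omitted here for brevity.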
- Persistent Visual Memory: Sustaining Perception for Deep Generation in LVLMs
While autoregressive Large Vision-Language Models (LVLMs) demonstrate remarkable proficiency in multimodal tasks, they face a "Visual Signal Dilution" phenomenon, where the accumulation of textual history expands the attention partition function, causing visual attention to decay inversely with generated sequence length. To counteract this, we propose Persistent Visual Memory (PVM), a lightweight learnable module designed to ensure sustained, on-demand visual perception. Integrated as a parallel branch alongside the Feed-Forward Network (FFN) in LVLMs, PVM establishes a distance-agnostic retrieval pathway that directly provides visual embeddings for precise visual perception, thereby structurally mitigating the signal suppression inherent to deep generation. Extensive experiments on Qwen3-VL models demonstrate that PVM brings notable improvements with negligible parameter overhead, delivering consistent average accuracy gains across both 4B and 8B scales, particularly in complex reasoning tasks that demand persistent visual perception. Furthermore, in-depth analysis reveals that PVM can resist length-induced signal decay and accelerate internal prediction convergence.
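The idea of a parallel branch beside the FFN that retrieves visual embeddings by similarity could be sketched as follows. This is a pure-Python toy under our own assumptions about shapes and gating, not the paper's architecture:

```python
import math

def layer_with_pvm(hidden, visual_mem, w_ffn, w_pvm):
    """Sketch of a sub-layer with a parallel PVM branch:
    output = hidden + FFN(hidden) + PVM(hidden, visual_mem).
    The PVM branch retrieves visual memory slots by dot-product
    similarity, giving a pathway back to the image that does not
    depend on how much text separates the token from the image."""
    ffn_out = [w_ffn * h for h in hidden]               # stand-in FFN
    # attention-style retrieval over visual memory slots
    scores = [sum(h * v for h, v in zip(hidden, slot)) for slot in visual_mem]
    weights = [math.exp(s) for s in scores]
    z = sum(weights)
    retrieved = [
        sum(w / z * slot[i] for w, slot in zip(weights, visual_mem))
        for i in range(len(hidden))
    ]
    pvm_out = [w_pvm * r for r in retrieved]
    return [h + f + p for h, f, p in zip(hidden, ffn_out, pvm_out)]
```

Because the retrieval pathway bypasses the self-attention history, its contribution does not shrink as the generated sequence grows, which is the mitigation the abstract describes.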
- Repetition over Diversity: High-Signal Data Filtering for Sample-Efficient German Language Modeling
Recent research has shown that filtering massive English web corpora into high-quality subsets significantly improves training efficiency. However, for high-resource non-English languages like German, French, or Japanese, aggressive filtering creates a strategic dilemma: should practitioners prioritize diversity by training once on large amounts of lightly filtered web data, or prioritize quality by strictly filtering for a high-quality core and repeating it over multiple epochs? We investigate this trade-off for German by constructing hierarchical quality filters applied to 500M web documents, comparing multi-epoch training on the filtered subsets against single-pass training on a diverse corpus. Our experiments across multiple model scales and token budgets show that repeating high-quality data consistently outperforms single-pass training on larger, less filtered sets. Notably, the performance gap persists even after 7 epochs. Our findings suggest that for non-English LLMs, semantic concentration through quality filtering offers a more viable path to efficient language modeling than simply maximizing unique data volume. We release our German language models (called Boldt), as well as our cleaned evaluation benchmarks to the research community. Our experiments indicate that they achieve state-of-the-art results despite training on 10-360x fewer tokens than comparable models.
- OceanPile: A Large-Scale Multimodal Ocean Corpus for Foundation Models
The vast and underexplored ocean plays a critical role in regulating global climate and supporting marine biodiversity, yet artificial intelligence has so far delivered limited impact in this domain due to a fundamental data bottleneck. Specifically, ocean data are highly fragmented across disparate sources and inherently exhibit multi-modal, high-noise, and weakly labeled characteristics, lacking unified schemas and semantic alignment. Although Multimodal Large Language Models (MLLMs) have achieved remarkable success in general domains, their application to ocean science remains severely constrained by the absence of large-scale, well-aligned multimodal datasets tailored to marine environments. To bridge this gap, we introduce OceanPile, a large-scale multimodal corpus designed for ocean foundation models. It comprises three key components: OceanCorpus, a unified collection integrating sonar data, underwater imagery, marine science visuals, and scientific text from diverse authoritative sources; OceanInstruction, a high-quality instruction dataset synthesized via a novel pipeline guided by a hierarchical Ocean Concept Knowledge Graph; and OceanBenchmark, a manually curated evaluation benchmark for rigorous assessment. We establish a multi-stage quality control process to ensure scientific validity and alignment across modalities. Experimental validation demonstrates significant performance improvements for models trained on our data. All datasets are publicly released to advance the field of marine artificial intelligence and empower domain-specific MLLMs.
- Hallucinations Undermine Trust; Metacognition is a Way Forward
Despite significant strides in factual reliability, errors -- often termed hallucinations -- remain a major concern for generative AI, especially as LLMs are increasingly expected to be helpful in more complex or nuanced setups. Yet even in the simplest setting -- factoid question-answering with clear ground truth -- frontier models without external tools continue to hallucinate. We argue that most factuality gains in this domain have come from expanding the model's knowledge boundary (encoding more facts) rather than improving awareness of that boundary (distinguishing known from unknown). We conjecture that the latter is inherently difficult: models may lack the discriminative power to perfectly separate truths from errors, creating an unavoidable tradeoff between eliminating hallucinations and preserving utility. This tradeoff dissolves under a different framing. If we understand hallucinations as confident errors -- incorrect information delivered without appropriate qualification -- a third path emerges beyond the answer-or-abstain dichotomy: expressing uncertainty. We propose faithful uncertainty: aligning linguistic uncertainty with intrinsic uncertainty. This is one facet of metacognition -- the ability to be aware of one's own uncertainty and to act on it. For direct interaction, acting on uncertainty means communicating it honestly; for agentic systems, it becomes the control layer governing when to search and what to trust. Metacognition is thus essential for LLMs to be both trustworthy and capable; we conclude by highlighting open problems for progress towards this objective.
Techmeme (36)
- Duolingo reports Q1 revenue up 27% YoY to $292M, vs. $288.5M est., bookings up 14% to $308.5M, and expects slower growth in Q2; DUOL drops 12%+ after hours (Akash Sriram/Reuters)
Duolingo (DUOL.O) posted strong first-quarter results but signaled a more measured growth trajectory ahead, as the language-learning …
- Musk v. Altman: Greg Brockman testifies that his OpenAI stake is now worth ~$30B; a Musk attorney asks why he hasn't donated $29B to OpenAI's nonprofit arm (Bloomberg)
OpenAI co-founder and President Greg Brockman testified that his stake in the startup is now worth almost $30 billion …
- Palantir reports Q1 revenue up 85% YoY to $1.63B, vs. $1.54B est., US government revenue up 84% to $687M, and US commercial revenue up 133% to $595M (Jaspreet Singh/Reuters)
Palantir Technologies (PLTR.O) beat Wall Street estimates for first-quarter revenue on Monday, driven by rising demand for its data analytics software …
- Elon Musk agrees to pay $1.5M to settle SEC allegations that he cheated Twitter shareholders in 2022 by failing to disclose the 5%+ stake he had in the company (Nicola M White/Bloomberg)
Elon Musk agreed to settle Securities and Exchange Commission allegations that he cheated Twitter shareholders out of millions …
- Pinterest reports Q1 revenue up 18% YoY to $1B, vs. $966M est., MAUs up 11% YoY to 631M, and forecasts Q2 revenue above estimates; PINS jumps 17%+ after hours (Jonathan Vanian/CNBC)
Pinterest reported first-quarter earnings on Monday that beat on the top and bottom lines. Shares soared 15% after the report.
- Musk v. Altman: Stuart Russell, Musk's only AI expert witness, warned of AI risks but his concerns about AI's existential threats were excluded by the judge (Tim Fernholz/TechCrunch)
When do we take AI doomers seriously? That's a key subtext of Elon Musk's attempt to shut down OpenAI's for-profit AI business.
- Kaspersky says Daemon Tools, a widely used app for mounting disk images, has been backdoored in a monthlong compromise that has pushed malicious updates (Dan Goodin/Ars Technica)
- EA reports Q4 net bookings up 3.6% YoY to $1.86B, vs. $2B est., weighed down by a post-launch drop-off in engagement for Battlefield 6 (Anhata Rooprai/Reuters)
Videogame publisher Electronic Arts (EA.O) missed quarterly bookings estimates on Tuesday, weighed down by a post-launch drop-off in engagement for its …
- GlobalFoundries reports Q1 revenue up 3% YoY to $1.63B, in line with est., and forecasts Q2 revenue and adjusted earnings above estimates; GFS closes up 9.28% (Patrick Seitz/Investor's Business Daily)
Contract chipmaker GlobalFoundries (GFS) on Tuesday beat earnings estimates on in-line sales for the first quarter.
- Micron closes up 11% after announcing its highest-capacity SSD has started to ship, lifting its market cap past $700B for the first time; Sandisk closes up 12% (Lola Murti/CNBC)
Micron's historic rally continued on Tuesday, with shares of the memory maker surging 11%, lifting the company's market cap past $700 billion for the first time.
- A US court sentences a Latvian national to 8.5 years for acting as a negotiator for Russia's Karakurt ransomware group (Sergiu Gatlan/BleepingComputer)
A Latvian national extradited to the United States was sentenced to 8.5 years in prison for his “cold case” negotiator role in the Russian Karakurt ransomware group.
- Super Micro reports Q3 revenue up 123% YoY to $10.24B, vs. $12.33B est., forecasts Q4 revenue and adjusted profit above estimates; SMCI jumps 17%+ after hours (Juby Babu/Reuters)
Super Micro Computer (SMCI.O) on Tuesday forecast fourth-quarter revenue above Wall Street estimates, banking on robust demand …
Solidot (29)
- Scientists discover how coffee affects the gut and the brain
According to a study published in Nature Communications, scientists found that habitually drinking coffee, whether caffeinated or decaffeinated, alters the gut microbiome and thereby influences mood and stress levels. The researchers compared 31 habitual coffee drinkers (defined as people drinking 3-5 cups a day) with 31 non-drinkers. At the start of the experiment, the coffee drinkers abstained from coffee for two weeks while the researchers continuously collected biological samples and monitored their mental health. During the trial, participants did not know whether their coffee was caffeinated: half drank decaf and the other half regular coffee. All participants reported improved mood, suggesting that even decaffeinated coffee can lift mood. The study also found that habitual drinkers carried higher levels of Eggertella sp. and Cryptobacterium curtum, and more Firmicutes. Only those who consumed decaf showed gains in learning and memory, while only those who consumed caffeine experienced reduced anxiety and improved attention and alertness.
- Astronomers find 27 candidate planets orbiting binary stars
Astronomers have identified 27 candidate planets orbiting binary stars, reminiscent of Tatooine, the desert planet from Star Wars. To date only 18 circumbinary planets have been found, compared with more than 8,000 planets orbiting single stars like the Sun. Circumbinary planets were previously identified through transits, which can be observed only under particular conditions. The new approach relies on apsidal precession: searching mutually orbiting, eclipsing binary systems for orbital wobbles that can usually be explained only by the presence of a third body. Using data collected by NASA's Transiting Exoplanet Survey Satellite, the team identified 36 candidates among 1,590 stellar systems, 27 of which may be of planetary mass. The researchers say further study is needed to confirm whether they are circumbinary planets.
- VS Code inserts "Co-Authored-by Copilot" into commits by default
Microsoft's editor VS Code was found to insert "Co-Authored-by Copilot" into commit messages by default, whether or not the user had actually used its AI assistant Copilot. The discovery drew another round of heavy criticism from users. A Microsoft developer responded that the default-on behavior will be fixed in the next release, agreeing that if a user did not use the AI assistant, the commit should not claim the code was co-written by Copilot.
- China's green-tech exports grew 70% in March
With a new energy crisis triggered by the blockade of the Strait of Hormuz, countries around the world are accelerating their transition to clean energy. China, the largest exporter of green technology, saw its combined March exports of solar products, batteries, and electric vehicles rise 70% year over year: exported solar capacity reached 68 GW, battery exports hit $10 billion, and exports of electric and hybrid vehicles grew 140%. As many as 50 countries imported record amounts of solar equipment from China.
- Linux share among Steam users stands at 4.52%
In March 2026 the share of Steam players on Linux reached an unprecedented 5.33%, more than double the previous month. According to Valve's Steam hardware and software survey for April 2026, the Linux share has since fallen back to 4.52%, down 0.81 percentage points, yet still double the figure from a year earlier. Windows rose to 93.47% and OSX accounts for 2.01%. There is ample evidence that gaming on Linux has improved dramatically, and one notable trait of Linux gaming is that it needs fewer resources than Windows, which is especially attractive as memory prices soar. Other figures: Simplified Chinese users make up 23.41% and English users 36.77%; 55.81% of users run Intel CPUs and 44.18% AMD, nearly unchanged from the previous month.
- UK NHS prepares to close all its open-source repositories, citing AI
Last month the scheduling platform Cal.com announced it was moving from open source to closed source, arguing that AI tools make it easier to find vulnerabilities in open code and that, since its security rests on obscurity, going closed improves it. Now the UK's National Health Service (NHS) is preparing to close nearly all of its open-source repositories on the same grounds, a decision that has drawn widespread controversy and criticism. Critics note that most of the repositories the NHS publishes are datasets, internal tools, guidelines, research tools, and front-end designs, none of which are affected by advances in security scanning. Moreover, being open source makes no difference to AI tools like Anthropic Mythos, which can analyze binaries for vulnerabilities just as well. Critics have published an open letter urging the NHS to keep its code public.
- MS Edge found to load all saved passwords into memory in clear text
The MS Edge browser was found to load all of its saved passwords into memory in clear text at startup. By contrast, Chrome decrypts credentials only when needed and does not keep every password in memory. Both Edge and Chrome are built on the open-source Chromium. Microsoft's approach makes it easier to scrape sensitive data from memory and raises the risk of password leaks in shared environments. Security researchers reported the issue to Microsoft and were told the behavior is by design. The researchers published a proof-of-concept tool, EdgeSavedPasswordsDumper, on GitHub.
- NetHack 5.0 released
The 39-year-old roguelike NetHack has released version 5.0. NetHack first shipped in 1987; the "Net" in its name refers to collaborative development over the network, and "Hack" to the hack-and-slash of role-playing games. Players can take on classes such as knight, barbarian, wizard, ranger, valkyrie, monk, and samurai, aiming to retrieve the Amulet of Yendor from the deepest level of the dungeon and offer it to their god. Besides Windows, NetHack 5.0 also supports MS-DOS and Amiga, and being fully open source it can be compiled to run on Linux and other Unix-like systems. Version 5.0 can now be built with any C99-compliant compiler, uses Lua to generate the dungeon, adds an optional tutorial to the early game, and more.
- CNN founder Ted Turner dies at 87
Ted Turner, founder of CNN, has died at the age of 87. The network he created became famous for its 24-hour live coverage of world news and revolutionized television journalism. CNN launched on June 1, 1980 as the first 24-hour cable news network. In 1995 CNN was sold to Time Warner and Turner left the television business; he always called CNN the greatest achievement of his life.
- Study finds eating eggs may help lower Alzheimer's risk
Researchers found that eating one egg a day, at least five days a week, may cut the risk of Alzheimer's disease by up to 27%. Eating eggs 1-3 times a month was associated with a 17% lower risk, and 2-4 times a week with a 20% lower risk. The researchers say eggs supply key nutrients for brain health. Eggs provide choline, a precursor of acetylcholine and phosphatidylcholine, both critical to memory and synaptic function. They also contain lutein and zeaxanthin, carotenoids linked to better cognition and lower oxidative stress, as well as important omega-3 fatty acids. Egg yolk is especially rich in phospholipids, which account for nearly 30% of an egg's total lipids and are essential to the function of neurotransmitter receptors. The study was funded by the American Egg Board.
- OpenAI's president made to read his personal diary on the witness stand
Last week Elon Musk testified in court, accusing OpenAI's other two co-founders, Greg Brockman and Sam Altman, of abandoning the company's founding nonprofit mission for personal gain. This week Brockman took the stand and was made to read from his personal diary before the jury, in entries that seemed to corroborate Musk's accusations. Brockman said he has kept a diary since his student days and has used it throughout his career to think through major decisions. The diaries were submitted as evidence last October and unsealed this January. In 2017 Musk gave OpenAI an ultimatum: either he would take full control of OpenAI's for-profit arm, or OpenAI would remain a nonprofit. Around the same time, Brockman was writing in his diary at length about the benefits of making money. After OpenAI created a for-profit arm outside Musk's control, Brockman's personal stake in OpenAI grew to its current value of $30 billion. He also agonized in the diary over whether voting against Musk's plan, or voting to remove Musk from the board, was morally wrong, writing: "Taking this nonprofit away from him is wrong. It is morally corrupt."
- The Oscars reject AI actors and AI-written scripts
The Academy of Motion Picture Arts and Sciences, which runs the Oscars, announced that only human performances and human-written scripts are eligible for Oscar nominations. The Oscars will not ban AI tools outright; instead, films will be judged on whether humans still play the central role in the creative work. The Academy said that if filmmakers use AI tools in a work, those tools will neither help nor hurt its chances of a nomination. This is the first time the Academy has stated explicitly that awards go only to human performances and human-written scripts.