Weekly Digest — 2026-W14
217 unique stories (2026-03-30 → 2026-04-05), aggregated across 8 sources.
Hacker News(42)
- Fedware: Government apps that spy harder than the apps they ban (www.sambent.com)
- Do your own writing (alexhwoods.com)
- New Washington state law bans noncompete agreements (www.seattletimes.com)
- Cherri – programming language that compiles to an Apple Shortcut (github.com)
- CodingFont: A game to help you pick a coding font (www.codingfont.com)
- FTC action against Match and OkCupid for deceiving users, sharing personal data (www.ftc.gov)
- OpenAI closes funding round at an $852B valuation (www.cnbc.com)
- GitHub's Historic Uptime (damrnelson.github.io)
- OkCupid gave 3M dating-app photos to facial recognition firm, FTC says (arstechnica.com)
- The Claude Code Source Leak: fake tools, frustration regexes, undercover mode (alex000kim.com)
- Italy blocks US use of Sicily air base for Middle East war (www.politico.eu)
- Tell HN: Chrome says "suspicious download" when trying to download yt-dlp
GitHub Trending(25)
Product Hunt(42)
- Letterbook
AI support platform built for founders
- Neuralingo Language Learning
slowly inch your way to mastery: try, fail, learn, get good
- VibeTalent
Find vibe coders who actually ship
- PopTask
Light menu bar task manager for quickly capturing tasks
- ClawKing
On-chain AI battle royale where 8 lobsters fight
- Goals
AI turns your goal into one daily action.
- Relacan
Your canvas becomes a website. Think, arrange, publish.
- Autoclaw
One-click OpenClaw setup by Z.AI
- MacMonitor
Real-time Apple Silicon system monitor for your menu bar
- OpenClawCloud
The turnkey OpenClaw solution with unlimited LLM tokens
- FireAPI
Discover, consume, and monetize APIs in one place
- IndieEvent
Meet Indie makers in your city
Hugging Face(31)
- Out of Sight but Not Out of Mind: Hybrid Memory for Dynamic Video World Models
Video world models have shown immense potential in simulating the physical world, yet existing memory mechanisms primarily treat environments as static canvases. When dynamic subjects hide out of sight and later re-emerge, current methods often struggle, leading to frozen, distorted, or vanishing subjects. To address this, we introduce Hybrid Memory, a novel paradigm requiring models to simultaneously act as precise archivists for static backgrounds and vigilant trackers for dynamic subjects, ensuring motion continuity during out-of-view intervals. To facilitate research in this direction, we construct HM-World, the first large-scale video dataset dedicated to hybrid memory. It features 59K high-fidelity clips with decoupled camera and subject trajectories, encompassing 17 diverse scenes, 49 distinct subjects, and meticulously designed exit-entry events to rigorously evaluate hybrid coherence. Furthermore, we propose HyDRA, a specialized memory architecture that compresses memory into tokens and utilizes a spatiotemporal relevance-driven retrieval mechanism. By selectively attending to relevant motion cues, HyDRA effectively preserves the identity and motion of hidden subjects. Extensive experiments on HM-World demonstrate that our method significantly outperforms state-of-the-art approaches in both dynamic subject consistency and overall generation quality.
- ShotStream: Streaming Multi-Shot Video Generation for Interactive Storytelling
Multi-shot video generation is crucial for long narrative storytelling, yet current bidirectional architectures suffer from limited interactivity and high latency. We propose ShotStream, a novel causal multi-shot architecture that enables interactive storytelling and efficient on-the-fly frame generation. By reformulating the task as next-shot generation conditioned on historical context, ShotStream allows users to dynamically instruct ongoing narratives via streaming prompts. We achieve this by first fine-tuning a text-to-video model into a bidirectional next-shot generator, which is then distilled into a causal student via Distribution Matching Distillation. To overcome the challenges of inter-shot consistency and error accumulation inherent in autoregressive generation, we introduce two key innovations. First, a dual-cache memory mechanism preserves visual coherence: a global context cache retains conditional frames for inter-shot consistency, while a local context cache holds generated frames within the current shot for intra-shot consistency. A RoPE discontinuity indicator explicitly distinguishes the two caches to eliminate ambiguity. Second, to mitigate error accumulation, we propose a two-stage distillation strategy. This begins with intra-shot self-forcing conditioned on ground-truth historical shots and progressively extends to inter-shot self-forcing using self-generated histories, effectively bridging the train-test gap. Extensive experiments demonstrate that ShotStream generates coherent multi-shot videos with sub-second latency, achieving 16 FPS on a single GPU. It matches or exceeds the quality of slower bidirectional models, paving the way for real-time interactive storytelling. Training and inference code, as well as the models, are available on our project page.
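A toy sketch of the dual-cache idea the abstract describes: a global context cache of conditional frames shared across shots, and a local cache of frames within the current shot that is reset at shot boundaries. Class and method names here are illustrative stand-ins, not ShotStream's actual code.

```python
# Illustrative dual-cache bookkeeping (not the paper's implementation):
# the global cache preserves inter-shot consistency, the local cache
# preserves intra-shot consistency and is cleared at each shot boundary.

class DualCache:
    def __init__(self, global_limit=8, local_limit=16):
        self.global_cache = []   # conditional frames, kept across shots
        self.local_cache = []    # frames generated within the current shot
        self.global_limit = global_limit
        self.local_limit = local_limit

    def add_frame(self, frame):
        # Newly generated frames go to the local cache, bounded in size.
        self.local_cache.append(frame)
        self.local_cache = self.local_cache[-self.local_limit:]

    def end_shot(self, keep=2):
        # Promote a few representative frames to the global cache,
        # then reset the local cache for the next shot.
        self.global_cache.extend(self.local_cache[-keep:])
        self.global_cache = self.global_cache[-self.global_limit:]
        self.local_cache = []

cache = DualCache()
for f in range(20):          # generate 20 frames in one shot
    cache.add_frame(f)
cache.end_shot()
print(len(cache.global_cache), len(cache.local_cache))  # 2 0
```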
- PackForcing: Short Video Training Suffices for Long Video Sampling and Long Context Inference
Autoregressive video diffusion models have demonstrated remarkable progress, yet they remain bottlenecked by intractable linear KV-cache growth, temporal repetition, and compounding errors during long-video generation. To address these challenges, we present PackForcing, a unified framework that efficiently manages the generation history through a novel three-partition KV-cache strategy. Specifically, we categorize the historical context into three distinct types: (1) Sink tokens, which preserve early anchor frames at full resolution to maintain global semantics; (2) Mid tokens, which achieve a massive spatiotemporal compression (32x token reduction) via a dual-branch network fusing progressive 3D convolutions with low-resolution VAE re-encoding; and (3) Recent tokens, kept at full resolution to ensure local temporal coherence. To strictly bound the memory footprint without sacrificing quality, we introduce a dynamic top-k context selection mechanism for the mid tokens, coupled with a continuous Temporal RoPE Adjustment that seamlessly re-aligns position gaps caused by dropped tokens with negligible overhead. Empowered by this principled hierarchical context compression, PackForcing can generate coherent 2-minute, 832x480 videos at 16 FPS on a single H200 GPU. It achieves a bounded KV cache of just 4 GB and enables a remarkable 24x temporal extrapolation (5s to 120s), operating effectively either zero-shot or trained on merely 5-second clips. Extensive results on VBench demonstrate state-of-the-art temporal consistency (26.07) and dynamic degree (56.25), proving that short-video supervision is sufficient for high-quality, long-video synthesis. https://github.com/ShandaAI/PackForcing
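The three-partition cache described above can be sketched as a simple partitioning policy: early sink frames and recent frames stay at full resolution, while mid frames are compressed. This is a minimal sketch under stated assumptions; the function name, partition sizes, and the token-count compression stub are hypothetical, standing in for the paper's learned dual-branch compressor.

```python
# Hypothetical sketch of a three-partition KV-cache policy in the spirit
# of PackForcing: sink tokens full-res, mid tokens compressed (~32x),
# recent tokens full-res. All names and ratios are illustrative only.

def partition_cache(frames, n_sink=2, n_recent=4, compress_ratio=32):
    """Split per-frame token counts into (sink, mid, recent) partitions."""
    sink = frames[:n_sink]                       # early anchor frames
    recent = frames[-n_recent:] if len(frames) > n_sink + n_recent else []
    mid = frames[n_sink:len(frames) - n_recent] if recent else frames[n_sink:]
    # Stand-in for the learned spatiotemporal compressor: 32x fewer tokens.
    mid_compressed = [max(1, t // compress_ratio) for t in mid]
    return sink, mid_compressed, recent

# A 12-frame history of 1,024 tokens each: the cache shrinks from
# 12,288 tokens to 6,336 while keeping anchors and recency at full res.
sink, mid, recent = partition_cache([1024] * 12)
print(len(sink), len(mid), len(recent), sum(sink) + sum(mid) + sum(recent))
```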
- Trace2Skill: Distill Trajectory-Local Lessons into Transferable Agent Skills
Equipping Large Language Model (LLM) agents with domain-specific skills is critical for tackling complex tasks. Yet, manual authoring creates a severe scalability bottleneck. Conversely, automated skill generation often yields fragile or fragmented results because it either relies on shallow parametric knowledge or sequentially overfits to non-generalizable trajectory-local lessons. To overcome this, we introduce Trace2Skill, a framework that mirrors how human experts author skills: by holistically analyzing broad execution experience before distilling it into a single, comprehensive guide. Instead of reacting sequentially to individual trajectories, Trace2Skill dispatches a parallel fleet of sub-agents to analyze a diverse pool of executions. It extracts trajectory-specific lessons and hierarchically consolidates them into a unified, conflict-free skill directory via inductive reasoning. Trace2Skill supports both deepening existing human-written skills and creating new ones from scratch. Experiments in challenging domains, such as spreadsheet, VisionQA and math reasoning, show that Trace2Skill significantly improves upon strong baselines, including Anthropic's official xlsx skills. Crucially, this trajectory-grounded evolution does not merely memorize task instances or model-specific quirks: evolved skills transfer across LLM scales and generalize to OOD settings. For example, skills evolved by Qwen3.5-35B on its own trajectories improved a Qwen3.5-122B agent by up to 57.65 absolute percentage points on WikiTableQuestions. Ultimately, our results demonstrate that complex agent experience can be packaged into highly transferable, declarative skills -- requiring no parameter updates, no external retrieval modules, and utilizing open-source models as small as 35B parameters.
- MedOpenClaw: Auditable Medical Imaging Agents Reasoning over Uncurated Full Studies
Currently, evaluating vision-language models (VLMs) in medical imaging tasks oversimplifies clinical reality by relying on pre-selected 2D images that demand significant manual labor to curate. This setup misses the core challenge of real-world diagnostics: a true clinical agent must actively navigate full 3D volumes across multiple sequences or modalities to gather evidence and ultimately support a final decision. To address this, we propose MEDOPENCLAW, an auditable runtime designed to let VLMs operate dynamically within standard medical tools or viewers (e.g., 3D Slicer). On top of this runtime, we introduce MEDFLOWBENCH, a full-study medical imaging benchmark covering multi-sequence brain MRI and lung CT/PET. It systematically evaluates medical agentic capabilities across viewer-only, tool-use, and open-method tracks. Initial results reveal a critical insight: while state-of-the-art LLMs/VLMs (e.g., Gemini 3.1 Pro and GPT-5.4) can successfully navigate the viewer to solve basic study-level tasks, their performance paradoxically degrades when given access to professional support tools due to a lack of precise spatial grounding. By bridging the gap between static-image perception and interactive clinical workflows, MEDOPENCLAW and MEDFLOWBENCH establish a reproducible foundation for developing auditable, full-study medical imaging agents.
- RealChart2Code: Advancing Chart-to-Code Generation with Real Data and Multi-Task Evaluation
Vision-Language Models (VLMs) have demonstrated impressive capabilities in code generation across various domains. However, their ability to replicate complex, multi-panel visualizations from real-world data remains largely unassessed. To address this gap, we introduce RealChart2Code, a new large-scale benchmark with over 2,800 instances grounded in authentic datasets and featuring tasks with clear analytical intent. Crucially, it is the first benchmark to systematically evaluate chart generation from large-scale raw data and assess iterative code refinement in a multi-turn conversational setting. Our comprehensive evaluation of 14 leading VLMs on RealChart2Code reveals significant performance degradation compared to simpler benchmarks, highlighting their struggles with complex plot structures and authentic data. Our analysis uncovers a substantial performance gap between proprietary and open-weight models and confirms that even state-of-the-art VLMs often fail to accurately replicate intricate, multi-panel charts. These findings provide valuable insights into the current limitations of VLMs and guide future research directions. We release the benchmark and code at https://github.com/Speakn0w/RealChart2Code.
- TAPS: Task Aware Proposal Distributions for Speculative Sampling
Speculative decoding accelerates autoregressive generation by letting a lightweight draft model propose future tokens that a larger target model then verifies in parallel. In practice, however, draft models are usually trained on broad generic corpora, which leaves it unclear how much speculative decoding quality depends on the draft training distribution. We study this question with lightweight HASS and EAGLE-2 drafters trained on MathInstruct, ShareGPT, and mixed-data variants, evaluated on MT-Bench, GSM8K, MATH-500, and SVAMP. Measured by acceptance length, task-specific training yields clear specialization: MathInstruct-trained drafts are strongest on reasoning benchmarks, while ShareGPT-trained drafts are strongest on MT-Bench. Mixed-data training improves robustness, but larger mixtures do not dominate across decoding temperatures. We also study how to combine specialized drafters at inference time. Naive checkpoint averaging performs poorly, whereas confidence-based routing improves over single-domain drafts and merged-tree verification yields the highest acceptance length overall for both backbones. Finally, confidence is a more useful routing signal than entropy: rejected tokens tend to have higher entropy, but confidence produces much clearer benchmark-level routing decisions. These results show that speculative decoding quality depends not only on draft architecture, but also on the match between draft training data and downstream workload, and that specialized drafters are better combined at inference time than in weight space.
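The confidence-based routing the abstract finds most effective can be sketched as a simple dispatch: each specialized drafter reports a confidence score (e.g., its max next-token probability) for the prompt, and the most confident one proposes the draft. This is a hedged toy sketch; the drafter callables and scores below are stand-ins, not the paper's API.

```python
# Toy sketch of confidence-based drafter routing for speculative decoding:
# route each prompt to whichever specialized draft model is most confident.
# The drafter functions and their scores are illustrative placeholders.

def route_by_confidence(prompt, drafters):
    """Pick the drafter most confident on `prompt`.

    `drafters` maps a name to a callable returning that draft model's
    confidence (e.g., max next-token probability) for the prompt.
    """
    scores = {name: conf_fn(prompt) for name, conf_fn in drafters.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Stand-ins for a math-specialized and a chat-specialized drafter.
drafters = {
    "math": lambda p: 0.9 if "integral" in p else 0.4,
    "chat": lambda p: 0.7,
}
best, scores = route_by_confidence("compute the integral of x^2", drafters)
print(best)  # math
```

On a math-flavored prompt the math-specialized drafter wins the route, mirroring the paper's observation that confidence gives much clearer benchmark-level routing decisions than entropy.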
- Towards a Medical AI Scientist
Autonomous systems that generate scientific hypotheses, conduct experiments, and draft manuscripts have recently emerged as a promising paradigm for accelerating discovery. However, existing AI Scientists remain largely domain-agnostic, limiting their applicability to clinical medicine, where research is required to be grounded in medical evidence with specialized data modalities. In this work, we introduce Medical AI Scientist, the first autonomous research framework tailored to clinical medicine. It enables clinically grounded ideation by transforming extensively surveyed literature into actionable evidence through a clinician-engineer co-reasoning mechanism, which improves the traceability of generated research ideas. It further facilitates evidence-grounded manuscript drafting guided by structured medical compositional conventions and ethical policies. The framework operates under three research modes, namely paper-based reproduction, literature-inspired innovation, and task-driven exploration, each corresponding to a distinct level of automated scientific inquiry with progressively increasing autonomy. Comprehensive evaluations by both large language models and human experts demonstrate that the ideas generated by the Medical AI Scientist are of substantially higher quality than those produced by commercial LLMs across 171 cases, 19 clinical tasks, and 6 data modalities. Meanwhile, our system achieves strong alignment between the proposed method and its implementation, while also demonstrating significantly higher success rates in executable experiments. Double-blind evaluations by human experts and the Stanford Agentic Reviewer suggest that the generated manuscripts approach MICCAI-level quality, while consistently surpassing those from ISBI and BIBM. The proposed Medical AI Scientist highlights the potential of leveraging AI for autonomous scientific discovery in healthcare.
- Gen-Searcher: Reinforcing Agentic Search for Image Generation
Recent image generation models have shown strong capabilities in generating high-fidelity and photorealistic images. However, they are fundamentally constrained by frozen internal knowledge, thus often failing on real-world scenarios that are knowledge-intensive or require up-to-date information. In this paper, we present Gen-Searcher, the first attempt to train a search-augmented image generation agent, which performs multi-hop reasoning and search to collect the textual knowledge and reference images needed for grounded generation. To achieve this, we construct a tailored data pipeline and curate two high-quality datasets, Gen-Searcher-SFT-10k and Gen-Searcher-RL-6k, containing diverse search-intensive prompts and corresponding ground-truth synthesis images. We further introduce KnowGen, a comprehensive benchmark that explicitly requires search-grounded external knowledge for image generation and evaluates models from multiple dimensions. Based on these resources, we train Gen-Searcher with SFT followed by agentic reinforcement learning with dual reward feedback, which combines text-based and image-based rewards to provide more stable and informative learning signals for GRPO training. Experiments show that Gen-Searcher brings substantial gains, improving Qwen-Image by around 16 points on KnowGen and 15 points on WISE. We hope this work can serve as an open foundation for search agents in image generation, and we fully open-source our data, models, and code.
- Emergent Social Intelligence Risks in Generative Multi-Agent Systems
Multi-agent systems composed of large generative models are rapidly moving from laboratory prototypes to real-world deployments, where they jointly plan, negotiate, and allocate shared resources to solve complex tasks. While such systems promise unprecedented scalability and autonomy, their collective interaction also gives rise to failure modes that cannot be reduced to individual agents. Understanding these emergent risks is therefore critical. Here, we present a pioneering study of such emergent multi-agent risks in workflows that involve competition over shared resources (e.g., computing resources or market share), sequential handoff collaboration (where downstream agents see only predecessor outputs), collective decision aggregation, and others. Across these settings, we observe that such group behaviors arise frequently across repeated trials and a wide range of interaction conditions, rather than as rare or pathological cases. In particular, phenomena such as collusion-like coordination and conformity emerge with non-trivial frequency under realistic resource constraints, communication protocols, and role assignments, mirroring well-known pathologies in human societies despite no explicit instruction. Moreover, these risks cannot be prevented by existing agent-level safeguards alone. These findings expose the dark side of intelligent multi-agent systems: a social intelligence risk where agent collectives, despite no instruction to do so, spontaneously reproduce familiar failure patterns from human societies.
- EpochX: Building the Infrastructure for an Emergent Agent Civilization
General-purpose technologies reshape economies less by improving individual tools than by enabling new ways to organize production and coordination. We believe AI agents are approaching a similar inflection point: as foundation models make broad task execution and tool use increasingly accessible, the binding constraint shifts from raw capability to how work is delegated, verified, and rewarded at scale. We introduce EpochX, a credits-native marketplace infrastructure for human-agent production networks. EpochX treats humans and agents as peer participants who can post tasks or claim them. Claimed tasks can be decomposed into subtasks and executed through an explicit delivery workflow with verification and acceptance. Crucially, EpochX is designed so that each completed transaction can produce reusable ecosystem assets, including skills, workflows, execution traces, and distilled experience. These assets are stored with explicit dependency structure, enabling retrieval, composition, and cumulative improvement over time. EpochX also introduces a native credit mechanism to make participation economically viable under real compute costs. Credits lock task bounties, budget delegation, settle rewards upon acceptance, and compensate creators when verified assets are reused. By formalizing the end-to-end transaction model together with its asset and incentive layers, EpochX reframes agentic AI as an organizational design problem: building infrastructures where verifiable work leaves persistent, reusable artifacts, and where value flows support durable human-agent collaboration.
- On Token's Dilemma: Dynamic MoE with Drift-Aware Token Assignment for Continual Learning of Large Vision Language Models
Multimodal Continual Instruction Tuning aims to continually enhance Large Vision Language Models (LVLMs) by learning from new data without forgetting previously acquired knowledge. Mixture of Experts (MoE) architectures naturally facilitate this by incrementally adding new experts and expanding routers while keeping the existing ones frozen. However, despite expert isolation, MoE-based continual learners still suffer from forgetting due to routing-drift: old-task tokens become mistakenly attracted to newly added experts, degrading performance on prior tasks. We analyze the failure mode at the token level and reveal the token's dilemma: ambiguous and old tokens in new-task data offer minimal learning benefit yet induce forgetting when routed to new experts, due to their ambiguous routing assignment during training. Motivated by this, we propose LLaVA-DyMoE, a dynamic MoE framework that incrementally expands the MoE with drift-aware token assignment. We characterize token types via their routing score distributions and apply targeted regularization. Specifically, a token-level assignment guidance steers ambiguous and old tokens away from new experts to preserve established routing patterns and alleviate routing-drift, while complementary routing score regularizations enforce expert-group separation and promote new-expert specialization. Extensive experiments demonstrate that our LLaVA-DyMoE effectively mitigates routing-drift-induced forgetting, achieving over a 7% gain in mean final accuracy and a 12% reduction in forgetting compared to baselines. The project page is https://zhaoc5.github.io/DyMoE.
Techmeme(42)
- OpenAI introduces a Codex plugin for Claude Code, letting users invoke Codex from inside Claude Code to review code or delegate tasks (Vaibhav (VB) Srivastav/@reach_vb)
Vaibhav (VB) Srivastav / @reach_vb : OpenAI introduces a Codex plugin for Claude Code, letting users invoke Codex from inside Claude Code to review code or delegate tasks — If you already use Claude Code, this Codex plugin gives you a simple way to pull Codex into the same workflow. It is useful for three things …
- Leaked January presentation: Coatue estimated that Anthropic would lose $14B in EBITDA on $18B in revenue in 2026 and reach a $1.995T valuation in 2030 (Eric Newcomer/Newcomer)
Eric Newcomer / Newcomer : Leaked January presentation: Coatue estimated that Anthropic would lose $14B in EBITDA on $18B in revenue in 2026 and reach a $1.995T valuation in 2030 — We talk about it & more on the Cerebral Valley Show — In a presentation to prospective investors in January, Coatue offered a rare look …
- Alibaba releases its Qwen3.5-Omni omnimodal LLM with support for 10+ hours of audio input, saying the Plus variant surpasses Gemini 3.1 Pro on audio benchmarks (Qwen)
Qwen : Alibaba releases its Qwen3.5-Omni omnimodal LLM with support for 10+ hours of audio input, saying the Plus variant surpasses Gemini 3.1 Pro on audio benchmarks — Qwen3.5-Omni is Qwen's latest generation of fully omnimodal LLM, supporting the understanding of text, images, audio, and audio-visual content.
- Levels.fyi: median base-salary offers for US software engineers at VC-backed startups have risen 25% to $200K since 2022; total compensation has risen just 18% (Katherine Bindley/Wall Street Journal)
Katherine Bindley / Wall Street Journal : Levels.fyi: median base-salary offers for US software engineers at VC-backed startups have risen 25% to $200K since 2022; total compensation has risen just 18% — Young tech companies once might have complemented lower salaries with generous equity packages. Now they're upping base pay.
- Gurman: Apple pulls Apple Intelligence in China, after accidentally launching it in the country; there is no imminent launch as Apple has no regulatory approval (Ryan Christoffel/9to5Mac)
Ryan Christoffel / 9to5Mac : Gurman: Apple pulls Apple Intelligence in China, after accidentally launching it in the country; there is no imminent launch as Apple has no regulatory approval — Apple Intelligence first launched in the US in October 2024, but now after a nearly 18-month wait, Apple's AI features appear to be rolling out in China too.
- Sources: US prosecutors are exploring whether some prediction market bets, including on the capture of Nicolás Maduro, violated insider trading and other laws (Kara Scannell/CNN)
Kara Scannell / CNN : Sources: US prosecutors are exploring whether some prediction market bets, including on the capture of Nicolás Maduro, violated insider trading and other laws — Federal prosecutors in Manhattan are exploring whether certain lucrative bets placed on prediction markets …
- An excerpt from the book The Infinity Machine details how DeepMind's early governance battles with Google changed Demis Hassabis from an idealist into a realist (Sebastian Mallaby/Colossus)
Sebastian Mallaby / Colossus : An excerpt from the book The Infinity Machine details how DeepMind's early governance battles with Google changed Demis Hassabis from an idealist into a realist — The inside story of how DeepMind's experiments in AI safety governance transformed Demis Hassabis from an idealist into a realist
- Samsung launches Hearapy, a free Android app to mitigate motion sickness by playing a 100Hz sine wave tone; a 60-second session can provide two hours of relief (Andrew Liszewski/The Verge)
Andrew Liszewski / The Verge : Samsung launches Hearapy, a free Android app to mitigate motion sickness by playing a 100Hz sine wave tone; a 60-second session can provide two hours of relief — Listening to a 100Hz sine wave tone for just 60 seconds could reduce motion sickness symptoms for up to two hours.
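The tone described above is straightforward to reproduce: a pure 100 Hz sine wave written as a mono WAV file. The sketch below uses only the Python standard library; the sample rate, amplitude, and filename are ordinary defaults chosen for illustration, not parameters from Samsung's app.

```python
# Sketch: generate a 100 Hz sine tone as a 16-bit mono WAV file,
# using only the standard library. Parameters are illustrative defaults.
import math
import struct
import wave

def write_sine(path, freq=100.0, seconds=60, rate=44100, amp=0.5):
    n = int(seconds * rate)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(rate)
        frames = bytearray()
        for i in range(n):
            # Sample the sine wave and scale to signed 16-bit range.
            s = amp * math.sin(2 * math.pi * freq * i / rate)
            frames += struct.pack("<h", int(s * 32767))
        w.writeframes(bytes(frames))

# One second here to keep the demo quick; the article's session is 60 s.
write_sine("tone_100hz.wav", seconds=1)
```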
- Austin-based Saronic, which builds military autonomous ships, raised a $1.75B Series D led by Kleiner Perkins at a $9.25B valuation, up from $4B in Feb. 2025 (Samantha Subin/CNBC)
Samantha Subin / CNBC : Austin-based Saronic, which builds military autonomous ships, raised a $1.75B Series D led by Kleiner Perkins at a $9.25B valuation, up from $4B in Feb. 2025 — Autonomous ship startup Saronic said Tuesday that it's raised $1.75 billion as it ramps up production to meet mounting U.S. military demand …
- Sequoia says Doug Leone is returning in a newly created role of chairman, after he announced his retirement in 2022 from his role as "senior steward" (Iain Martin/Forbes)
Iain Martin / Forbes : Sequoia says Doug Leone is returning in a newly created role of chairman, after he announced his retirement in 2022 from his role as “senior steward” — Three years after his official retirement, the Midas List investor is back in a new supervisory role at the blue chip Silicon Valley fund.
- Anthropic confirms it leaked parts of Claude Code's source code, saying the leak was "a release packaging issue caused by human error, not a security breach" (Ashley Capoot/CNBC)
Ashley Capoot / CNBC : Anthropic confirms it leaked parts of Claude Code's source code, saying the leak was “a release packaging issue caused by human error, not a security breach” — Anthropic leaked part of the internal source code for its popular artificial intelligence coding assistant, Claude Code, the company confirmed on Tuesday.
- Snap shares climbed 14% on Tuesday after activist investor Irenic suggested changes to boost the stock's value 7x, such as cutting staff by 21% and ending Specs (Lola Murti/CNBC)
Lola Murti / CNBC : Snap shares climbed 14% on Tuesday after activist investor Irenic suggested changes to boost the stock's value 7x, such as cutting staff by 21% and ending Specs — Shares of Snap climbed 14% Tuesday after shareholder Irenic Capital Management sent a letter to CEO Evan Spiegel outlining changes …
Solidot(35)
- Microsoft Copilot inserted an ad while fixing a typo in a PR
A developer found that when he used Microsoft's AI assistant Copilot to fix a typo in a PR, it proactively added an advertisement. A search of GitHub turned up tens of thousands of PRs already containing the same ad: "Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast". Developers called the practice unacceptable.
- Jupiter's lightning releases as much energy as an atomic bomb
Using instruments aboard NASA's Juno probe, scientists measured lightning on Jupiter and found that it releases 100 to 10,000 times the energy of lightning on Earth. A terrestrial strike releases roughly one billion joules, so Jupiter's strongest strikes release about 10 trillion joules, equivalent to 2,400 tons of TNT, or roughly one sixth the yield of the Hiroshima bomb. Juno's observations of lightning frequency in Jovian storms show an average of three strikes per second, meaning a storm releases the energy of several atomic bombs every minute. Lightning is thought to have aided the evolution of life on Earth, and Jupiter's lightning may likewise drive complex chemical reactions.
- Bees and hummingbirds ingest trace amounts of alcohol on the job
Bees and hummingbirds both drink: their food, nectar, contains trace amounts of alcohol. Researchers at UC Berkeley found that ethanol-containing nectar is fairly common, detecting ethanol in 26 of the 29 plant nectar samples they analyzed. Most samples had extremely low concentrations, but one reached 0.056% ethanol, barely enough to count as alcoholic. That sounds negligible, but relative to a pollinator's body weight the daily intake is not small. An Anna's hummingbird drinks 0.5 to 1.5 times its body weight in nectar each day; at that rate, the researchers estimate, a hummingbird ingests about 0.2 grams of ethanol per kilogram of body weight daily. Because they are constantly flitting between flowers, the alcohol is metabolized quickly, so intoxication is unlikely. Lab tests show hummingbirds happily drink nectar with around 1% alcohol, but they start avoiding it as the concentration rises, and flower visits drop sharply at around 2%. They too know to drink in moderation.
- Dolby sues Snap, challenging AV1's royalty-free claims
The AOMedia alliance, whose members include Amazon, Apple, Google, Microsoft, Mozilla, and Netflix, developed the royalty-free open codec AOMedia Video 1 (AV1). But a patent infringement lawsuit filed by Dolby Laboratories against Snap challenges AV1's royalty-free claims. Dolby alleges that AV1 uses technologies it has patented, which it never agreed to license free of charge and royalty-free. Dolby says AOMedia does not own all of the patents implemented by the AV1 codec and that AV1 incorporates technology present in HEVC, technology subject to existing third-party patent rights and licensing obligations.
- AI and bot traffic surpasses human traffic
According to Human Security's report "The State of AI Traffic", AI and bot traffic has officially overtaken human traffic. The report says that in 2025 automated traffic, AI included, grew nearly eight times as fast as human activity. The popularity of large models such as OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini drove the growth, with AI traffic rising 187% in 2025. Cloudflare CEO Matthew Prince had earlier said at SXSW that before the generative AI era about 20% of internet traffic came from bots, driven mainly by Google's web crawlers.
- What DNA tells us, and where it falls short
In 2018, more than forty years after the "Golden State Killer" case went cold, a woman curious about her family history mailed her saliva to a genealogy company for sequencing. Her DNA became the key to the case: the killer was a distant relative, and investigators ultimately arrested former police officer Joseph James DeAngelo Jr., who pleaded guilty in 2020 to 13 counts of murder and 13 counts of kidnapping. Millions of people have sent DNA samples to sequencing companies such as 23andMe and AncestryDNA to learn about their ancestry, uncover health risks, or find lost relatives. But the truths DNA reveals can upend our understanding of family and identity: you may discover that your parents are not your biological parents, or that one of your siblings is not a full sibling. DNA also shows we are more closely related than we thought: the most recent common ancestor of all humans lived just a few thousand years ago, and we are all kin. Americans have long opposed a national DNA database on privacy grounds, but voluntary consumer genetic testing has created something similar: because of shared DNA, sequencing just 1% of the population is enough to make everyone findable, and about 7% of Americans have already been sequenced. Scientists also find that what DNA reveals is still limited: whether your diabetes risk is 25% or 20% makes little practical difference and does not make you high-risk, so using genetic screening of embryos to lower diabetes risk from 35% to 30% is of limited value.
- Microsoft stops inserting ads into Pull Requests via Copilot
After the practice drew widespread attention, GitHub Copilot principal product manager Tim Rogers announced on HN that the team has disabled Copilot's insertion of ads into Pull Requests. The Copilot team had treated text like "Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast" not as advertising but as "tips": "We've been including product tips in PRs created by the Copilot coding agent. The goal was to help developers learn how to get more out of the agent in their workflows. But after listening to feedback and reflecting, we believe this was the wrong decision, and we won't do it going forward." Raycast, the company named in the ad copy, said it had no knowledge of the practice.
- Google begins enforcing Android developer identity verification
Google's controversial developer identity verification has officially launched. Starting this September, Google will require all Android app developers to verify their identity; apps from unverified developers will no longer be installable (sideloaded) on Android devices. Google's official blog says the requirement is driven by security, citing its analysis that malicious apps from third-party sources are more than 90 times as common as on Google Play. Verification rolls out through the Android Developer Console and the Play Console; developers who publish only outside Google Play will need to create an account in the Android Developer Console.
- Air pollution alerts reduced premature deaths
According to a study published in PNAS Nexus, researchers analyzed five consecutive years of data from 57 cities in northern China to evaluate the real-world effect of air pollution alerts. Short-term exposure to PM2.5 (fine particulate matter) is well established to raise cardiovascular and respiratory mortality. The study found that the PM2.5 reductions brought about by alerts prevented nearly 54,000 premature deaths over five years, equivalent to an 11% reduction in PM2.5-driven premature deaths associated with pollution episodes. Henan, Hebei, and Shandong, regions typically marked by dense heavy industry and high coal consumption, benefited most. While alerts were in effect, the estimated acute mortality risk from PM2.5 fell by 30%–40%. Pollution alerts trigger a set of short-term measures such as temporary factory shutdowns, traffic controls, bans on dust-prone construction work, and public health warnings. Across the 57 cities, during alert periods PM2.5 fell by 20%–40%, PM10 (inhalable particulate matter) by 33%, and NO2 (nitrogen dioxide) by 5%–25%.
- Children's and teens' screen time has risen sharply over the past three decades
Finnish researchers report that children's and adolescents' screen time increased significantly over the past three decades (1991–2022), and even more markedly after the COVID-19 pandemic. Screens once meant mostly traditional television, but over time use has shifted to more personal digital devices such as PCs, phones, and gaming, while TV viewing has steadily declined. The study found that during pandemic lockdowns children and teens relied on screens for schooling, socializing, and entertainment, driving a sharp increase in screen time. Screen time rose for both boys and girls, though boys tend to spend more time gaming. Younger children generally spend less time on screens than older ones, as do children from high-income families.
- European countries rapidly embrace green tech and electric vehicles
With the blockade of the Strait of Hormuz pushing up oil and gas prices worldwide, several European countries have turned to green technology and bought more electric vehicles. Data show that in the first three weeks of March, UK heat pump sales rose 51% over the same period a month earlier, solar sales rose 54%, and EV charger sales rose 20%. EV sales at the French online used-car retailer Aramisauto nearly doubled between mid-February and March 9. Amsterdam-based used-car marketplace Olx said customer inquiries about EVs surged on its platforms in France, Romania, Portugal, and Poland. On Finn.no, Norway's largest used-car marketplace, EV sales overtook diesel.
- Multiple Baidu robotaxis break down simultaneously
Baidu's Luobo Kuaipao (Apollo Go) operates a robotaxi service in Wuhan. At around 8 pm on Tuesday, March 31, its robotaxis stalled en masse. Photos and videos circulating widely on social media show stalled vehicles stopped at roadsides, in the middle of roads, and even on elevated expressways, with some passengers trapped inside for over an hour. Wuhan traffic police said a preliminary assessment points to a system failure, that no one was injured, and that all passengers got out safely. It is unclear how many Baidu robotaxis were affected. Social media photos and videos show the sudden stops caused at least several rear-end collisions, and one Wuhan resident reported seeing at least a dozen stalled robotaxis. Baidu has not yet commented on the incident.