Weekly Digest — 2026-W16

222 unique stories (2026-04-13 to 2026-04-19), aggregated across 8 sources.

Hacker News (42)

  1. GitHub Stacked PRs (github.github.com)
  2. Someone Bought 30 WordPress Plugins and Planted a Backdoor in All of Them (anchor.host)
  3. The Future of Everything Is Lies, I Guess: Safety (aphyr.com)
  4. Building a CLI for All of Cloudflare (blog.cloudflare.com)
  5. Nothing Ever Happens: Polymarket bot that always buys No on non-sports markets (github.com)
  6. Make tmux pretty and usable (2024) (hamvocke.com)
  7. Tell HN: Fiverr left customer files public and searchable
  8. YouTube now world's largest media company, topping Disney (www.hollywoodreporter.com)
  9. OpenSSL 4.0.0 (github.com)
  10. I wrote to Flock's privacy contact to opt out of their domestic spying program (honeypot.net)
  11. Claude Code Routines (code.claude.com)
  12. Spain to expand internet blocks to tennis, golf, movies broadcasting times (bandaancha.eu)

GitHub Trending (26)

  1. forrestchang / andrej-karpathy-skills
  2. NousResearch / hermes-agent
  3. shiyu-coder / Kronos
  4. thedotmack / claude-mem
  5. microsoft / markitdown
  6. multica-ai / multica
  7. jamiepine / voicebox
  8. pascalorg / editor
  9. obra / superpowers
  10. Lordog / dive-into-llms
  11. virattt / ai-hedge-fund
  12. chrislgarry / Apollo-11

Product Hunt (41)

  1. REasy

    The operating system for African importers

  2. Vekta

    AI training and coaching platform for endurance sports

  3. Legitify

    Digital notarization across 50+ jurisdictions

  4. Luma Agents

    Agents that plan, iterate, + refine w/ full creative context

  5. GhostDesk

    Your invisible AI co-pilot for interviews & meetings

  6. Ably Chat

    The Chat API built for serious scale

  7. ElevenAgents Guardrails 2.0

    Configurable safety control for enterprise agent deployment.

  8. Caveman

    Why use so many token when few do trick?

  9. Open Agents

    Agents that ship real code

  10. Hapax

    Watches your workflows. Builds your Agents. Automatically.

  11. Softr AI Co-Builder

    Build business apps with AI - that actually work

  12. CatDoes v4

    An AI agent with its own computer builds your apps

Hugging Face (31)

  1. WildDet3D: Scaling Promptable 3D Detection in the Wild

    Understanding objects in 3D from a single image is a cornerstone of spatial intelligence. A key step toward this goal is monocular 3D object detection: recovering the extent, location, and orientation of objects from an input RGB image. To be practical in the open world, such a detector must generalize beyond closed-set categories, support diverse prompt modalities, and leverage geometric cues when available. Progress is hampered by two bottlenecks: existing methods are designed for a single prompt type and lack a mechanism to incorporate additional geometric cues, and current 3D datasets cover only narrow categories in controlled environments, limiting open-world transfer. In this work we address both gaps. First, we introduce WildDet3D, a unified geometry-aware architecture that natively accepts text, point, and box prompts and can incorporate auxiliary depth signals at inference time. Second, we present WildDet3D-Data, the largest open 3D detection dataset to date, constructed by generating candidate 3D boxes from existing 2D annotations and retaining only human-verified ones, yielding over 1M images across 13.5K categories in diverse real-world scenes. WildDet3D establishes a new state-of-the-art across multiple benchmarks and settings. In the open-world setting, it achieves 22.6/24.8 AP3D on our newly introduced WildDet3D-Bench with text and box prompts. On Omni3D, it reaches 34.2/36.4 AP3D with text and box prompts, respectively. In zero-shot evaluation, it achieves 40.3/48.9 ODS on Argoverse 2 and ScanNet. Notably, incorporating depth cues at inference time yields substantial additional gains (+20.7 AP on average across settings).

  2. FORGE: Fine-grained Multimodal Evaluation for Manufacturing Scenarios

    The manufacturing sector is increasingly adopting Multimodal Large Language Models (MLLMs) to transition from simple perception to autonomous execution, yet current evaluations fail to reflect the rigorous demands of real-world manufacturing environments. Progress is hindered by data scarcity and a lack of fine-grained domain semantics in existing datasets. To bridge this gap, we introduce FORGE. We first construct a high-quality multimodal dataset that combines real-world 2D images and 3D point clouds, annotated with fine-grained domain semantics (e.g., exact model numbers). We then evaluate 18 state-of-the-art MLLMs across three manufacturing tasks, namely workpiece verification, structural surface inspection, and assembly verification, revealing significant performance gaps. Counter to conventional understanding, the bottleneck analysis shows that visual grounding is not the primary limiting factor. Instead, insufficient domain-specific knowledge is the key bottleneck, setting a clear direction for future research. Beyond evaluation, we show that our structured annotations can serve as an actionable training resource: supervised fine-tuning of a compact 3B-parameter model on our data yields up to 90.8% relative improvement in accuracy on held-out manufacturing scenarios, providing preliminary evidence for a practical pathway toward domain-adapted manufacturing MLLMs. The code and datasets are available at https://ai4manufacturing.github.io/forge-web.

  3. RefineAnything: Multimodal Region-Specific Refinement for Perfect Local Details

    We introduce region-specific image refinement as a dedicated problem setting: given an input image and a user-specified region (e.g., a scribble mask or a bounding box), the goal is to restore fine-grained details while keeping all non-edited pixels strictly unchanged. Despite rapid progress in image generation, modern models still frequently suffer from local detail collapse (e.g., distorted text, logos, and thin structures). Existing instruction-driven editing models emphasize coarse-grained semantic edits and often either overlook subtle local defects or inadvertently change the background, especially when the region of interest occupies only a small portion of a fixed-resolution input. We present RefineAnything, a multimodal diffusion-based refinement model that supports both reference-based and reference-free refinement. Building on a counter-intuitive observation that crop-and-resize can substantially improve local reconstruction under a fixed VAE input resolution, we propose Focus-and-Refine, a region-focused refinement-and-paste-back strategy that improves refinement effectiveness and efficiency by reallocating the resolution budget to the target region, while a blended-mask paste-back guarantees strict background preservation. We further introduce a boundary-aware Boundary Consistency Loss to reduce seam artifacts and improve paste-back naturalness. To support this new setting, we construct Refine-30K (20K reference-based and 10K reference-free samples) and introduce RefineEval, a benchmark that evaluates both edited-region fidelity and background consistency. On RefineEval, RefineAnything achieves strong improvements over competitive baselines and near-perfect background preservation, establishing a practical solution for high-precision local refinement. Project Page: https://limuloo.github.io/RefineAnything/.

  4. EXAONE 4.5 Technical Report

    This technical report introduces EXAONE 4.5, the first open-weight vision language model released by LG AI Research. EXAONE 4.5 is architected by integrating a dedicated visual encoder into the existing EXAONE 4.0 framework, enabling native multimodal pretraining over both visual and textual modalities. The model is trained on large-scale data with careful curation, particularly emphasizing document-centric corpora that align with LG's strategic application domains. This targeted data design enables substantial performance gains in document understanding and related tasks, while also delivering broad improvements across general language capabilities. EXAONE 4.5 extends context length up to 256K tokens, facilitating long-context reasoning and enterprise-scale use cases. Comparative evaluations demonstrate that EXAONE 4.5 achieves competitive performance in general benchmarks while outperforming state-of-the-art models of similar scale in document understanding and Korean contextual reasoning. As part of LG's ongoing effort toward practical industrial deployment, EXAONE 4.5 is designed to be continuously extended with additional domains and application scenarios to advance AI for a better life.

  5. Matrix-Game 3.0: Real-Time and Streaming Interactive World Model with Long-Horizon Memory

    With the advancement of interactive video generation, diffusion models have increasingly demonstrated their potential as world models. However, existing approaches still struggle to simultaneously achieve memory-enabled long-term temporal consistency and high-resolution real-time generation, limiting their applicability in real-world scenarios. To address this, we present Matrix-Game 3.0, a memory-augmented interactive world model designed for 720p real-time longform video generation. Building upon Matrix-Game 2.0, we introduce systematic improvements across data, model, and inference. First, we develop an upgraded industrial-scale infinite data engine that integrates Unreal Engine-based synthetic data, large-scale automated collection from AAA games, and real-world video augmentation to produce high-quality Video-Pose-Action-Prompt quadruplet data at scale. Second, we propose a training framework for long-horizon consistency: by modeling prediction residuals and re-injecting imperfect generated frames during training, the base model learns self-correction; meanwhile, camera-aware memory retrieval and injection enable the base model to achieve long horizon spatiotemporal consistency. Third, we design a multi-segment autoregressive distillation strategy based on Distribution Matching Distillation (DMD), combined with model quantization and VAE decoder pruning, to achieve efficient real-time inference. Experimental results show that Matrix-Game 3.0 achieves up to 40 FPS real-time generation at 720p resolution with a 5B model, while maintaining stable memory consistency over minute-long sequences. Scaling up to a 2x14B model further improves generation quality, dynamics, and generalization. Our approach provides a practical pathway toward industrial-scale deployable world models.

  6. ECHO: Efficient Chest X-ray Report Generation with One-step Block Diffusion

    Chest X-ray report generation (CXR-RG) has the potential to substantially alleviate radiologists' workload. However, conventional autoregressive vision-language models (VLMs) suffer from high inference latency due to sequential token decoding. Diffusion-based models offer a promising alternative through parallel generation, but they still require multiple denoising iterations. Compressing multi-step denoising to a single step could further reduce latency, but often degrades textual coherence due to the mean-field bias introduced by token-factorized denoisers. To address this challenge, we propose ECHO, an efficient diffusion-based VLM (dVLM) for chest X-ray report generation. ECHO enables stable one-step-per-block inference via a novel Direct Conditional Distillation (DCD) framework, which mitigates the mean-field limitation by constructing unfactorized supervision from on-policy diffusion trajectories to encode joint token dependencies. In addition, we introduce a Response-Asymmetric Diffusion (RAD) training strategy that further improves training efficiency while maintaining model effectiveness. Extensive experiments demonstrate that ECHO surpasses state-of-the-art autoregressive methods, improving RaTE and SemScore by 64.33% and 60.58% respectively, while achieving an 8× inference speedup without compromising clinical accuracy.

  7. QuanBench+: A Unified Multi-Framework Benchmark for LLM-Based Quantum Code Generation

    Large Language Models (LLMs) are increasingly used for code generation, yet quantum code generation is still evaluated mostly within single frameworks, making it difficult to separate quantum reasoning from framework familiarity. We introduce QuanBench+, a unified benchmark spanning Qiskit, PennyLane, and Cirq, with 42 aligned tasks covering quantum algorithms, gate decomposition, and state preparation. We evaluate models with executable functional tests, report Pass@1 and Pass@5, and use KL-divergence-based acceptance for probabilistic outputs. We additionally study Pass@1 after feedback-based repair, where a model may revise code after a runtime error or wrong answer. Across frameworks, the strongest one-shot scores reach 59.5% in Qiskit, 54.8% in Cirq, and 42.9% in PennyLane; with feedback-based repair, the best scores rise to 83.3%, 76.2%, and 66.7%, respectively. These results show clear progress, but also that reliable multi-framework quantum code generation remains unsolved and still depends strongly on framework-specific knowledge.
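    The KL-divergence acceptance check for probabilistic outputs can be sketched as follows. This is a minimal illustration, not the benchmark's implementation: the threshold value, the direction of the divergence (expected vs. measured), and the epsilon smoothing are all assumptions.

    ```python
    import math

    def kl_divergence(p, q, eps=1e-12):
        """KL(P || Q) between two measurement-outcome distributions,
        given as {bitstring: probability} dicts; eps guards empty bins."""
        return sum(pi * math.log((pi + eps) / (q.get(k, 0.0) + eps))
                   for k, pi in p.items() if pi > 0)

    def accept(expected, measured, threshold=0.1):
        """Accept a probabilistic quantum program if its measured output
        distribution is close (in KL divergence) to the expected one."""
        return kl_divergence(expected, measured) <= threshold

    # Ideal Bell-state distribution vs. a slightly noisy measured one.
    ideal = {"00": 0.5, "11": 0.5}
    noisy = {"00": 0.48, "11": 0.49, "01": 0.02, "10": 0.01}
    print(accept(ideal, noisy))        # close match is accepted
    print(accept(ideal, {"00": 1.0}))  # collapsed distribution is rejected
    ```

    A threshold-based check like this tolerates shot noise in sampled outputs while still rejecting functionally wrong circuits, which an exact string comparison cannot do.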

  8. The Past Is Not Past: Memory-Enhanced Dynamic Reward Shaping

    Despite the success of reinforcement learning for large language models, a common failure mode is reduced sampling diversity, where the policy repeatedly generates similar erroneous behaviors. Classical entropy regularization encourages randomness under the current policy, but does not explicitly discourage recurrent failure patterns across rollouts. We propose MEDS, a Memory-Enhanced Dynamic reward Shaping framework that incorporates historical behavioral signals into reward design. By storing and leveraging intermediate model representations, we capture features of past rollouts and use density-based clustering to identify frequently recurring error patterns. Rollouts assigned to more prevalent error clusters are penalized more heavily, encouraging broader exploration while reducing repeated mistakes. Across five datasets and three base models, MEDS consistently improves average performance over existing baselines, achieving gains of up to 4.13 pass@1 points and 4.37 pass@128 points. Additional analyses using both LLM-based annotations and quantitative diversity metrics show that MEDS increases behavioral diversity during sampling.
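    The cluster-frequency penalty described above can be illustrated with a minimal sketch. The linear penalty form, the `penalty_scale` parameter, and the use of `None` for rollouts outside any error cluster are illustrative assumptions; the abstract does not give the paper's exact shaping function.

    ```python
    from collections import Counter

    def shaped_rewards(base_rewards, cluster_ids, penalty_scale=0.1):
        """Penalize rollouts that fall into more prevalent error clusters.

        cluster_ids[i] is the error-cluster label of rollout i, or None if
        the rollout was not assigned to any recurring error pattern.
        """
        counts = Counter(c for c in cluster_ids if c is not None)
        total = sum(counts.values()) or 1  # avoid division by zero
        shaped = []
        for reward, cluster in zip(base_rewards, cluster_ids):
            if cluster is None:
                shaped.append(reward)  # no recurring error: reward unchanged
            else:
                # Larger clusters (more frequent mistakes) draw a larger penalty.
                shaped.append(reward - penalty_scale * counts[cluster] / total)
        return shaped

    # Two rollouts share a common error pattern "a"; the third succeeded.
    print(shaped_rewards([0.0, 0.0, 1.0], ["a", "a", None]))
    ```

    Unlike a uniform entropy bonus, this shaping targets only the specific failure modes that recur across rollouts, leaving rewards for novel or successful behavior untouched.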

  9. Attention Sink in Transformers: A Survey on Utilization, Interpretation, and Mitigation

    As the foundational architecture of modern machine learning, Transformers have driven remarkable progress across diverse AI domains. Despite their transformative impact, a persistent challenge across various Transformers is Attention Sink (AS), in which a disproportionate amount of attention is focused on a small subset of specific yet uninformative tokens. AS complicates interpretability, significantly affecting the training and inference dynamics, and exacerbates issues such as hallucinations. In recent years, substantial research has been dedicated to understanding and harnessing AS. However, a comprehensive survey that systematically consolidates AS-related research and offers guidance for future advancements remains lacking. To address this gap, we present the first survey on AS, structured around three key dimensions that define the current research landscape: Fundamental Utilization, Mechanistic Interpretation, and Strategic Mitigation. Our work provides a pivotal contribution by clarifying key concepts and guiding researchers through the evolution and trends of the field. We envision this survey as a definitive resource, empowering researchers and practitioners to effectively manage AS within the current Transformer paradigm, while simultaneously inspiring innovative advancements for the next generation of Transformers. The paper list of this work is available at https://github.com/ZunhaiSu/Awesome-Attention-Sink.

  10. OmniShow: Unifying Multimodal Conditions for Human-Object Interaction Video Generation

    In this work, we study Human-Object Interaction Video Generation (HOIVG), which aims to synthesize high-quality human-object interaction videos conditioned on text, reference images, audio, and pose. This task holds significant practical value for automating content creation in real-world applications, such as e-commerce demonstrations, short video production, and interactive entertainment. However, existing approaches fail to accommodate all these requisite conditions. We present OmniShow, an end-to-end framework tailored for this practical yet challenging task, capable of harmonizing multimodal conditions and delivering industry-grade performance. To overcome the trade-off between controllability and quality, we introduce Unified Channel-wise Conditioning for efficient image and pose injection, and Gated Local-Context Attention to ensure precise audio-visual synchronization. To effectively address data scarcity, we develop a Decoupled-Then-Joint Training strategy that leverages a multi-stage training process with model merging to efficiently harness heterogeneous sub-task datasets. Furthermore, to fill the evaluation gap in this field, we establish HOIVG-Bench, a dedicated and comprehensive benchmark for HOIVG. Extensive experiments demonstrate that OmniShow achieves overall state-of-the-art performance across various multimodal conditioning settings, setting a solid standard for the emerging HOIVG task.

  11. Strips as Tokens: Artist Mesh Generation with Native UV Segmentation

    Recent advancements in autoregressive transformers have demonstrated remarkable potential for generating artist-quality meshes. However, the token ordering strategies employed by existing methods typically fail to meet professional artist standards, where coordinate-based sorting yields inefficiently long sequences, and patch-based heuristics disrupt the continuous edge flow and structural regularity essential for high-quality modeling. To address these limitations, we propose Strips as Tokens (SATO), a novel framework with a token ordering strategy inspired by triangle strips. By constructing the sequence as a connected chain of faces that explicitly encodes UV boundaries, our method naturally preserves the organized edge flow and semantic layout characteristic of artist-created meshes. A key advantage of this formulation is its unified representation, enabling the same token sequence to be decoded into either a triangle or quadrilateral mesh. This flexibility facilitates joint training on both data types: large-scale triangle data provides fundamental structural priors, while high-quality quad data enhances the geometric regularity of the outputs. Extensive experiments demonstrate that SATO consistently outperforms prior methods in terms of geometric quality, structural coherence, and UV segmentation.

  12. Uni-ViGU: Towards Unified Video Generation and Understanding via A Diffusion-Based Video Generator

    Unified multimodal models integrating visual understanding and generation face a fundamental challenge: visual generation incurs substantially higher computational costs than understanding, particularly for video. This imbalance motivates us to invert the conventional paradigm: rather than extending understanding-centric MLLMs to support generation, we propose Uni-ViGU, a framework that unifies video generation and understanding by extending a video generator as the foundation. We introduce a unified flow method that performs continuous flow matching for video and discrete flow matching for text within a single process, enabling coherent multimodal generation. We further propose a modality-driven MoE-based framework that augments Transformer blocks with lightweight layers for text generation while preserving generative priors. To repurpose generation knowledge for understanding, we design a bidirectional training mechanism with two stages: Knowledge Recall reconstructs input prompts to leverage learned text-video correspondences, while Capability Refinement fine-tunes on detailed captions to establish discriminative shared representations. Experiments demonstrate that Uni-ViGU achieves competitive performance on both video generation and understanding, validating generation-centric architectures as a scalable path toward unified multimodal intelligence. Project Page and Code: https://fr0zencrane.github.io/uni-vigu-page/.

Techmeme (42)

  1. Roblox says developers will need Roblox Plus, a new $4.99-per-month subscription offering benefits like discounts, to publish games for Kids and Select accounts (Aisha Malik/TechCrunch)

    Aisha Malik / TechCrunch: Roblox is introducing new account types designed to give kids and younger teens age-appropriate access to chat and games, the company announced on Monday.

  2. Microsoft raises prices for Surface PCs, with Laptop 7 and Pro 11 now $500 more expensive than at their 2024 launch, citing higher memory and component costs (Zac Bowden/Windows Central)

    Zac Bowden / Windows Central: Microsoft is raising prices on all its current Surface PC offerings, with the midrange devices now starting above $1,000 and flagships starting at $1,500.

  3. Filing: Anthropic hired Ballard Partners, a lobbying firm with strong ties to Trump administration, days after DOD designated the company a supply chain risk (Bloomberg)

    Bloomberg: Anthropic PBC hired the lobbying firm Ballard Partners as it draws out its fight with the Pentagon, a new public document shows.

  4. Anthropic says its $20M donation to Public First Action can't be "used to influence federal elections" and is to educate the public on AI policy (Veronica Irwin/Transformer)

    Veronica Irwin / Transformer: The company's money isn't allowed to be used in the midterm battles. Without it, pro-safety candidates may be even more outgunned than expected.

  5. Internal memo: Microsoft's gaming chief Asha Sharma says "Game Pass has become too expensive for players" and that Microsoft needs "a better value equation" (Tom Warren/The Verge)

    Tom Warren / The Verge: Microsoft is getting ready to address Game Pass pricing concerns.

  6. Amazon Leo unveils the Aviation Antenna, saying it can deliver up to 1 Gbps download and 400 Mbps upload speeds for in-flight Wi-Fi (Michael Kan/PCMag)

    Michael Kan / PCMag: Amazon Leo is trying to steal some of the spotlight from Starlink's in-flight Wi-Fi business by showing off its own satellite internet antenna for commercial jets.

  7. Meta and Broadcom announce an expanded partnership to co-develop multiple generations of Meta's MTIA chips; Broadcom CEO Hock Tan plans to leave Meta's board (CNBC)

    CNBC: Meta and Broadcom on Tuesday announced a sweeping deal that extends an existing partnership between the two companies for the design …

  8. The FCC grants Netgear a conditional approval to import its future consumer routers, cable modems, and cable gateways into the US through October 1, 2027 (Sean Hollister/The Verge)

    Sean Hollister / The Verge: Make it make sense. … The United States' foreign router ban didn't make a whole lot of sense, and today may not change that.

  9. US-based Credo, which specializes in data center connectivity, agrees to acquire Israeli chip company DustPhotonics in a cash-and-stock deal worth up to $1.3B (CTech)

    CTech: The Israeli company's photonic chip technology enables faster, lower-cost data transfer in next-generation AI clusters.

  10. AWS launches Amazon Bio Discovery, a new AI-powered application designed to speed up drug development, giving scientists access to biological foundation models (Reuters)

    Reuters: Amazon's (AMZN.O) cloud unit on Tuesday launched Amazon Bio Discovery, an artificial intelligence application designed …

  11. Users accuse Anthropic of degrading the performance of Claude Opus 4.6 and Claude Code; employees publicly deny the company degrades models to manage capacity (Carl Franzen/VentureBeat)

    Carl Franzen / VentureBeat: A growing number of developers and AI power users are taking to social media to accuse Anthropic of degrading the performance …

  12. Kraken co-CEO Arjun Sethi says the crypto exchange has confidentially filed for a US IPO; it was valued at $13.3B this month, down from a $20B peak in late 2025 (Cory Schouten/Semafor)

    Cory Schouten / Semafor: The US crypto exchange Kraken has confidentially filed for an initial public offering, co-CEO Arjun Sethi said Tuesday …

Solidot (40)

  1. Zuckerberg may soon have an AI clone

    The FT reports that Meta is building an AI version of Mark Zuckerberg to interact with employees in place of the real one. Citing people familiar with the matter, the report says this is currently a company priority, and Zuckerberg is personally involved in training and testing the AI. Training material includes his mannerisms, tone, and public statements, as well as his recent thinking on company strategy, so that employees can feel a closer connection to the founder by interacting with it. Sources say one focus of the work is producing a lifelike virtual AI character; achieving realism and avoiding latency during user interactions requires substantial compute, which makes scaling difficult. If the experiment succeeds, influencers and content creators could adopt the technology in the future.

  2. Computer science's golden age may be over

    Enrollment in computer science majors at US four-year colleges fell 8.1% in fall 2025. CS dropped from fourth to sixth among undergraduate majors within a year; the top three remain business, public health, and the humanities. From 2008 to 2024 computer science was the fastest-growing major in the US, but its golden age may now be over: 54,000 fewer students are majoring in CS than in the previous academic year. What did they choose instead? Data analytics and data science together enrolled more than 35,000 students, up from only a few hundred when those programs were first split off in 2020. The data also show some prospective CS students moving into adjacent fields such as robotics. Engineering enrollment grew 7.3% in fall 2025, with mechanical and electrical engineering the fastest-growing programs at 11% and 14% respectively. Professors believe that, with computer science graduates in oversupply, students may see mechanical engineering as more versatile and as offering better job prospects in an AI-driven world, in industries such as robotics, drones, aerospace, and electric vehicles.

  3. Portal 2: Community Edition enters open beta on April 18

    Fifteen years after Portal 2's release in April 2011, modders have produced several derivative titles, including Portal Stories: Mel, Aperture Tag: The Paint Gun Testing Initiative, and Portal Reloaded. Now Portal 2: Community Edition, the latest community release developed by the P2:CE Team, enters open beta on April 18. The Community Edition upgrades the game to Strata Source, an officially licensed, heavily modified CS:GO-based Source engine: native 64-bit support improves performance, native DirectX 11 support is added, many old-engine limits are removed, gameplay is updated and improved, the AngelScript scripting framework lets players easily extend game mechanics, and the Panorama UI framework from the Source 2 engine is adopted. The Community Edition will be free to existing Portal 2 owners.

  4. Long-term pesticide exposure may induce diabetes

    Global pesticide use reached 3.73 million tons in 2023, roughly double the 1990 level. Research on pesticide health risks has long focused on acute poisoning, neurotoxicity, and cancer; new genetic techniques now make it possible to trace pesticides' effects on the gut microbiome. Studying nearly 3,000 people in southern India, an Indian team found that 23% of urban residents had diabetes, mostly linked to classic risk factors such as obesity and high cholesterol, yet rural diabetes prevalence was still 16% and unrelated to those factors. Suspecting that environmental chemicals might be at work, the team studied the effects of cypermethrin, a widely used agricultural insecticide, in mice, administering a "realistic dose" based on pesticide residues in typical Indian diets for 120 days. Cypermethrin reshaped the mouse gut microbiome: beneficial bacteria such as Lactobacillus declined while potentially harmful ones such as Helicobacter pylori increased. Even without gaining weight, exposed mice developed high blood sugar and diabetic symptoms; the pesticide appears to change not only which microbes are present but also their metabolic activity. In a separate large study, researchers exposed 17 representative human gut bacterial species to 18 different pesticides and detected changes in hundreds of microbial small molecules, including short-chain fatty acids, bile acids, and tryptophan-related compounds, substances that maintain gut mucosal health, regulate inflammation, and modulate immune function. They also found that some bacteria accumulate pesticides inside their cells, which may prolong the chemicals' residence time in the body and raise long-term health risks.

  5. Google Play removes Doki Doki Literature Club

    Google Play has removed Doki Doki Literature Club, citing content that violates its terms of service on sensitive themes. Creator Dan Salvato said in a statement that he is working to get the game relisted on Google Play. Doki Doki Literature Club follows a high-school boy who joins his school's literature club and interacts with its four female members. It looks like a simple romance visual novel, but after completing one ending the story turns deeply strange, breaking the fourth wall by deleting files and save data. The free version has accumulated more than ten million downloads and is the top-ranked psychological horror game on Steam. The game shows multiple warnings about its sensitive themes at launch, and versions exist for iOS, Nintendo Switch, PlayStation, and other platforms.

  6. Valve engineer improves VRAM usage for Linux games

    As games grow ever more graphically intensive, VRAM usage is becoming a major problem: higher visual fidelity means storing more and more game assets in the GPU's VRAM, but VRAM capacity is limited, and not everyone has a data-center GPU with 128 GB of it on their desk. Valve engineer Natalie Vock has developed new kernel patches and two dedicated tools to address VRAM pressure on cards with 8 GB or less. Her patches mainly target AMD GPUs; Intel's Xe graphics are also supported, but NVIDIA cards using the proprietary driver are not, because NVIDIA's proprietary kernel module does not support dmem cgroups. The approach ensures that the foreground game gets priority on VRAM: as VRAM starts to fill, memory used by background tasks is migrated to system RAM first. Running Cyberpunk 2077 on a card with 8 GB of VRAM, 1.37 GB spilled into the GTT (Graphics Translation Table) and the game effectively used only 6 GB of VRAM; with the patches applied, the game's VRAM usage rose to 7.4 GB and GTT use fell to 650 MB.

  7. Google to penalize "back button hijacking"

    Many sites today prevent users from going "back." Starting June 15, Google will penalize sites that still do this by sharply lowering their search rankings, classifying the practice, known as "back button hijacking," as abusive. The technique is meant to trap visitors on a site to inflate traffic: when a visitor tries to return to the previous page with the back button, the site tampers with the page's browsing history, inserting other content to be shown when the back button is clicked. Google says the back button should always do what users expect, returning to the previous page; any other behavior is a deceptive user experience.

  8. Germany's Sovereign Tech Fund grants Mastodon €614,000

    Germany's Sovereign Tech Fund is granting €614,000 to Mastodon, the fediverse microblogging project, to support improvements and updates to Mastodon and its software ecosystem. The funds will go toward: blocklist synchronization; a new Fediverse Auxiliary Service Provider (FASP) that lets servers share storage and media-processing resources; automated content detection; end-to-end encryption for direct messages; and improved documentation. The work is expected to be completed between late 2026 and 2027.

  9. OpenSSL 4.0 released

    The OpenSSL project has released v4.0. Major new features include: support for Encrypted Client Hello (ECH), which improves privacy by encrypting the initial TLS handshake and hiding the Server Name Indication (SNI); removal of legacy protocol and engine support such as SSLv3; improved post-quantum cryptography support via RFC 8998; and removal of the SSLv2 Client Hello, along with dropped support for Darwin i386 and PowerPC/PPC64, among other changes.

  10. Servo publishes its first crates.io release

    Servo, the browser rendering engine written in Rust, has released the servo crate v0.1.0, its first crates.io release, allowing Servo to be used as a library by other projects. Servo originated at Mozilla, which cut most of the Servo engine team in its August 2020 layoffs; the project then left Mozilla to become independent, hosted by the Linux Foundation, with the goal of providing an embeddable, high-performance, safe rendering engine for other projects. Servo released v0.0.1 in October 2025 and has shipped a new version monthly since. The developers say they plan to offer long-term support (LTS) releases every six months, since embedders may not be able to update promptly to the latest Servo and are better served by an LTS; LTS releases are expected to receive nine months of security updates.

  11. Stanford AI report finds the US-China gap is minimal

    Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) has published its annual AI Index report, which finds that China's top AI models are nearly on par with America's. In January 2024 the top US AI scored about 10% higher than China's best; by March 2026 the gap between Anthropic's and ByteDance's models was just 2.7%. On benchmarks measuring accuracy on hard problems in language, math, and programming, the gap has also narrowed, with the US-China performance difference essentially eliminated. In data centers developed and operated, the US leads all other countries by far with 5,427, and its 2025 private investment of $285.9 billion is likewise far ahead. China's private investment was only $12.4 billion, but its government investment is large and the true total is unclear. Among the 100 most-cited papers, China had 41 in 2024, up 7 from the previous year, narrowing the gap with the first-place United States (46).

  12. Human painkillers work on lobsters

    According to a study published in Scientific Reports, lobsters can feel pain, and human painkillers can relieve it. The study adds to the case for developing more humane slaughter methods for crustaceans. The new research found that Norway lobsters given electric shocks in water flick their tails rapidly in an attempt to escape, but when pre-treated with the painkillers aspirin and lidocaine, the tail flicking decreases or disappears. Co-author Lynne Sneddon said that human analgesics working on Norway lobsters suggests human and lobster physiology are quite similar, and that people should take the rearing and slaughter of crustaceans as seriously as they do chickens and cattle.