Weekly Digest — 2026-W17
216 unique stories (2026-04-20 → 2026-04-26), aggregated across 8 sources.
Hacker News(42)
- John Ternus to become Apple CEO (www.apple.com)
- AI Resistance: some recent anti-AI stuff that’s worth discussing (stephvee.ca)
- At long last, InfoWars is ours (theonion.com)
- We accepted surveillance as default (vivianvoss.net)
- Not buying another Kindle (www.androidauthority.com)
- Deezer says 44% of songs uploaded to its platform daily are AI-generated (techcrunch.com)
- Changes to GitHub Copilot individual plans (github.blog)
- Claude Code removed from Anthropic's Pro plan (claude.com)
- ChatGPT Images 2.0 (openai.com)
- Framework Laptop 13 Pro (frame.work)
- Meta to start capturing employee mouse movements, keystrokes for AI training (www.reuters.com)
- The Vercel breach: OAuth attack exposes risk in platform environment variables (www.trendmicro.com)
GitHub Trending(24)
Product Hunt(42)
- Co-Tasker
Book local pros for quick & affordable help
- CONA
E-commerce accounting that runs itself
- Silex
Swiss legal AI, built by lawyers for lawyers
- Tetractys
AI for biomanufacturers
- Auxilius.ai
Turn compliance into code with agentic AI
- Getpin
Pin business, get interest, be found
- Magic Lane
Sovereign navigation infrastructure for Europe
- RankAI
RankAI autonomously gets you buyers from Google & AI Search
- Pioneer
Fine-tune any LLM in minutes, with one prompt
- Spectrum
Bring agents to all the interfaces people already use
- Kimi K2.6
Open-source SOTA for long-horizon coding and agent swarms
- OnTheMap
The global map for builders, founders, and visionaries
Hugging Face(30)
- Elucidating the SNR-t Bias of Diffusion Probabilistic Models
Diffusion Probabilistic Models have demonstrated remarkable performance across a wide range of generative tasks. However, we have observed that these models often suffer from a Signal-to-Noise Ratio-timestep (SNR-t) bias. This bias refers to the misalignment between the SNR of the denoising sample and its corresponding timestep during the inference phase. Specifically, during training, the SNR of a sample is strictly coupled with its timestep. However, this correspondence is disrupted during inference, leading to error accumulation and impairing the generation quality. We provide comprehensive empirical evidence and theoretical analysis to substantiate this phenomenon and propose a simple yet effective differential correction method to mitigate the SNR-t bias. Recognizing that diffusion models typically reconstruct low-frequency components before focusing on high-frequency details during the reverse denoising process, we decompose samples into various frequency components and apply differential correction to each component individually. Extensive experiments show that our approach significantly improves the generation quality of various diffusion models (IDDPM, ADM, DDIM, A-DPM, EA-DPM, EDM, PFGM++, and FLUX) on datasets of various resolutions with negligible computational overhead. The code is at https://github.com/AMAP-ML/DCW.
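The training-time coupling the abstract refers to can be made concrete with the standard DDPM forward process (a minimal sketch; the linear beta schedule here is the common textbook choice, not necessarily the one the paper uses):

```python
import numpy as np

# In DDPM training, x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps,
# so the signal-to-noise ratio at timestep t is SNR(t) = abar_t / (1 - abar_t).
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # illustrative linear beta schedule
alphas_bar = np.cumprod(1.0 - betas)      # abar_t = prod_{s<=t} (1 - beta_s)
snr = alphas_bar / (1.0 - alphas_bar)     # strictly decreasing in t

# During training every (sample SNR, t) pair lies exactly on this curve;
# the paper's observation is that accumulated sampling error at inference
# pushes x_t off the curve, producing the SNR-t bias.
assert np.all(np.diff(snr) < 0)
```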
- Maximal Brain Damage Without Data or Optimization: Disrupting Neural Networks via Sign-Bit Flips
Deep Neural Networks (DNNs) can be catastrophically disrupted by flipping only a handful of parameter bits. We introduce Deep Neural Lesion (DNL), a data-free and optimization-free method that locates critical parameters, and an enhanced single-pass variant, 1P-DNL, that refines this selection with one forward and backward pass on random inputs. We show that this vulnerability spans multiple domains, including image classification, object detection, instance segmentation, and reasoning large language models. In image classification, flipping just two sign bits in ResNet-50 on ImageNet reduces accuracy by 99.8%. In object detection and instance segmentation, one or two sign flips in the backbone collapse COCO detection and mask AP for Mask R-CNN and YOLOv8-seg models. In language modeling, two sign flips in different experts reduce Qwen3-30B-A3B-Thinking from 78% to 0% accuracy. We also show that selectively protecting a small fraction of vulnerable sign bits provides a practical defense against such attacks.
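For intuition, a sign-bit flip is just toggling bit 31 of an IEEE-754 float32, which negates the parameter. A tiny sketch of that bit operation (not the paper's DNL selection method, only the primitive it relies on):

```python
import numpy as np

def flip_sign_bit(w):
    """Negate a float32 by XOR-ing its IEEE-754 sign bit (bit 31)."""
    bits = np.array(w, dtype=np.float32).view(np.uint32)
    return (bits ^ np.uint32(0x80000000)).view(np.float32)

w = np.float32(0.731)
assert flip_sign_bit(w) == np.float32(-0.731)
assert flip_sign_bit(flip_sign_bit(w)) == w   # flipping twice restores it
```

The attack surface the paper describes is that doing this to just one or two well-chosen weights can collapse a model's accuracy.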
- PersonaVLM: Long-Term Personalized Multimodal LLMs
Multimodal Large Language Models (MLLMs) serve as daily assistants for millions. However, their ability to generate responses aligned with individual preferences remains limited. Prior approaches enable only static, single-turn personalization through input augmentation or output alignment, and thus fail to capture users' evolving preferences and personality over time (see Fig.1). In this paper, we introduce PersonaVLM, an innovative personalized multimodal agent framework designed for long-term personalization. It transforms a general-purpose MLLM into a personalized assistant by integrating three key capabilities: (a) Remembering: It proactively extracts and summarizes chronological multimodal memories from interactions, consolidating them into a personalized database. (b) Reasoning: It conducts multi-turn reasoning by retrieving and integrating relevant memories from the database. (c) Response Alignment: It infers the user's evolving personality throughout long-term interactions to ensure outputs remain aligned with their unique characteristics. For evaluation, we establish Persona-MME, a comprehensive benchmark comprising over 2,000 curated interaction cases, designed to assess long-term MLLM personalization across seven key aspects and 14 fine-grained tasks. Extensive experiments validate our method's effectiveness, improving the baseline by 22.4% (Persona-MME) and 9.8% (PERSONAMEM) under a 128k context, while outperforming GPT-4o by 5.2% and 2.0%, respectively. Project page: https://PersonaVLM.github.io.
- Web Retrieval-Aware Chunking (W-RAC) for Efficient and Cost-Effective Retrieval-Augmented Generation Systems
Retrieval-Augmented Generation (RAG) systems critically depend on effective document chunking strategies to balance retrieval quality, latency, and operational cost. Traditional chunking approaches, such as fixed-size, rule-based, or fully agentic chunking, often suffer from high token consumption, redundant text generation, limited scalability, and poor debuggability, especially for large-scale web content ingestion. In this paper, we propose Web Retrieval-Aware Chunking (W-RAC), a novel, cost-efficient chunking framework designed specifically for web-based documents. W-RAC decouples text extraction from semantic chunk planning by representing parsed web content as structured, ID-addressable units and leveraging large language models (LLMs) only for retrieval-aware grouping decisions rather than text generation. This significantly reduces token usage, eliminates hallucination risks, and improves system observability. Experimental analysis and architectural comparison demonstrate that W-RAC achieves comparable or better retrieval performance than traditional chunking approaches while reducing chunking-related LLM costs by an order of magnitude.
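The decoupling the abstract describes can be sketched as follows; `plan_chunks` is a hypothetical stand-in for the LLM call, and all names and data are illustrative rather than from the paper:

```python
# Parsed web content becomes ID-addressable units; the model only returns
# groupings of IDs, never generated text, so chunk text is assembled
# verbatim from the source and cannot be hallucinated.
units = {
    "u1": "Installation",
    "u2": "Run `pip install acme` to install.",
    "u3": "Configuration",
    "u4": "Set ACME_KEY in your environment.",
}

def plan_chunks(unit_ids):
    # Stand-in for an LLM prompted with unit IDs plus short previews,
    # constrained to answer only with lists of IDs (grouping decisions).
    return [["u1", "u2"], ["u3", "u4"]]

chunks = ["\n".join(units[i] for i in group)
          for group in plan_chunks(list(units))]
assert chunks[0] == "Installation\nRun `pip install acme` to install."
```

Because the model emits only IDs, token output stays tiny and every chunk is auditable back to its source units, which is the cost and observability win the abstract claims.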
- Qwen3.5-Omni Technical Report
In this work, we present Qwen3.5-Omni, the latest advancement in the Qwen-Omni model family. Representing a significant evolution over its predecessor, Qwen3.5-Omni scales to hundreds of billions of parameters and supports a 256k context length. By leveraging a massive dataset comprising heterogeneous text-vision pairs and over 100 million hours of audio-visual content, the model demonstrates robust omni-modality capabilities. Qwen3.5-Omni-plus achieves SOTA results across 215 audio and audio-visual understanding, reasoning, and interaction subtasks and benchmarks, surpassing Gemini-3.1 Pro in key audio tasks and matching it in comprehensive audio-visual understanding. Architecturally, Qwen3.5-Omni employs a Hybrid Attention Mixture-of-Experts (MoE) framework for both Thinker and Talker, enabling efficient long-sequence inference. The model facilitates sophisticated interaction, supporting over 10 hours of audio understanding and 400 seconds of 720P video (at 1 FPS). To address the inherent instability and unnaturalness in streaming speech synthesis, often caused by encoding efficiency discrepancies between text and speech tokenizers, we introduce ARIA. ARIA dynamically aligns text and speech units, significantly enhancing the stability and prosody of conversational speech with minimal latency impact. Furthermore, Qwen3.5-Omni expands linguistic boundaries, supporting multilingual understanding and speech generation across 10 languages with human-like emotional nuance. Finally, Qwen3.5-Omni exhibits superior audio-visual grounding capabilities, generating script-level structured captions with precise temporal synchronization and automated scene segmentation. Remarkably, we observed the emergence of a new capability in omnimodal models: directly performing coding based on audio-visual instructions, which we call Audio-Visual Vibe Coding.
- Cut Your Losses! Learning to Prune Paths Early for Efficient Parallel Reasoning
Parallel reasoning enhances Large Reasoning Models (LRMs) but incurs prohibitive costs due to futile paths caused by early errors. To mitigate this, path pruning at the prefix level is essential, yet existing research remains fragmented without a standardized framework. In this work, we propose the first systematic taxonomy of path pruning, categorizing methods by their signal source (internal vs. external) and learnability (learnable vs. non-learnable). This classification reveals the unexplored potential of learnable internal methods, motivating our proposal of STOP (Super TOken for Pruning). Extensive evaluations across LRMs ranging from 1.5B to 20B parameters demonstrate that STOP achieves superior effectiveness and efficiency compared to existing baselines. Furthermore, we rigorously validate the scalability of STOP under varying compute budgets - for instance, boosting GPT-OSS-20B accuracy on AIME25 from 84% to nearly 90% under fixed compute budgets. Finally, we distill our findings into formalized empirical guidelines to facilitate optimal real-world deployment. Code, data and models are available at https://bijiaxihh.github.io/STOP
- Extending One-Step Image Generation from Class Labels to Text via Discriminative Text Representation
Few-step generation has been a long-standing goal, with recent one-step generation methods exemplified by MeanFlow achieving remarkable results. Existing research on MeanFlow primarily focuses on class-to-image generation. However, an intuitive yet unexplored direction is to extend the condition from fixed class labels to flexible text inputs, enabling richer content creation. Compared to the limited class labels, text conditions pose greater challenges to the model's understanding capability, necessitating the effective integration of powerful text encoders into the MeanFlow framework. Surprisingly, although incorporating text conditions appears straightforward, we find that integrating powerful LLM-based text encoders using conventional training strategies results in unsatisfactory performance. To uncover the underlying cause, we conduct detailed analyses and reveal that, due to the extremely limited number of refinement steps in the MeanFlow generation, such as only one step, the text feature representations are required to possess sufficiently high discriminability. This also explains why discrete and easily distinguishable class features perform well within the MeanFlow framework. Guided by these insights, we leverage a powerful LLM-based text encoder validated to possess the required semantic properties and adapt the MeanFlow generation process to this framework, resulting in efficient text-conditioned synthesis for the first time. Furthermore, we validate our approach on the widely used diffusion model, demonstrating significant generation performance improvements. We hope this work provides a general and practical reference for future research on text-conditioned MeanFlow generation. The code is available at https://github.com/AMAP-ML/EMF.
- OneVL: One-Step Latent Reasoning and Planning with Vision-Language Explanation
Chain-of-Thought (CoT) reasoning has become a powerful driver of trajectory prediction in VLA-based autonomous driving, yet its autoregressive nature imposes a latency cost that is prohibitive for real-time deployment. Latent CoT methods attempt to close this gap by compressing reasoning into continuous hidden states, but consistently fall short of their explicit counterparts. We suggest that this is due to purely linguistic latent representations compressing a symbolic abstraction of the world, rather than the causal dynamics that actually govern driving. Thus, we present OneVL (One-step latent reasoning and planning with Vision-Language explanations), a unified VLA and World Model framework that routes reasoning through compact latent tokens supervised by dual auxiliary decoders. Alongside a language decoder that reconstructs text CoT, we introduce a visual world model decoder that predicts future-frame tokens, forcing the latent space to internalize the causal dynamics of road geometry, agent motion, and environmental change. A three-stage training pipeline progressively aligns these latents with trajectory, language, and visual objectives, ensuring stable joint optimization. At inference, the auxiliary decoders are discarded and all latent tokens are prefilled in a single parallel pass, matching the speed of answer-only prediction. Across four benchmarks, OneVL becomes the first latent CoT method to surpass explicit CoT, delivering state-of-the-art accuracy at answer-only latency, and providing direct evidence that tighter compression, when guided by both language and world-model supervision, produces more generalizable representations than verbose token-by-token reasoning. Project Page: https://xiaomi-embodied-intelligence.github.io/OneVL
- Agent-World: Scaling Real-World Environment Synthesis for Evolving General Agent Intelligence
Large language models are increasingly expected to serve as general-purpose agents that interact with external, stateful tool environments. The Model Context Protocol (MCP) and broader agent skills offer a unified interface for connecting agents with scalable real-world services, but training robust agents remains limited by the lack of realistic environments and principled mechanisms for life-long learning. In this paper, we present Agent-World, a self-evolving training arena for advancing general agent intelligence through scalable environments. Agent-World has two main components: (1) Agentic Environment-Task Discovery, which autonomously explores topic-aligned databases and executable tool ecosystems from thousands of real-world environment themes and synthesizes verifiable tasks with controllable difficulty; and (2) Continuous Self-Evolving Agent Training, which combines multi-environment reinforcement learning with a self-evolving agent arena that automatically identifies capability gaps through dynamic task synthesis and drives targeted learning, enabling the co-evolution of agent policies and environments. Across 23 challenging agent benchmarks, Agent-World-8B and 14B consistently outperform strong proprietary models and environment scaling baselines. Further analyses reveal scaling trends in relation to environment diversity and self-evolution rounds, offering insights for building general agent intelligence.
- OpenGame: Open Agentic Coding for Games
Game development sits at the intersection of creative design and intricate software engineering, demanding the joint orchestration of game engines, real-time loops, and tightly coupled state across many files. While Large Language Models (LLMs) and code agents now solve isolated programming tasks with ease, they consistently stumble when asked to produce a fully playable game from a high-level design, collapsing under cross-file inconsistencies, broken scene wiring, and logical incoherence. We bridge this gap with OpenGame, the first open-source agentic framework explicitly designed for end-to-end web game creation. At its core lies Game Skill, a reusable, evolving capability composed of a Template Skill that grows a library of project skeletons from experience and a Debug Skill that maintains a living protocol of verified fixes - together enabling the agent to scaffold stable architectures and systematically repair integration errors rather than patch isolated syntax bugs. Powering this framework is GameCoder-27B, a code LLM specialized for game engine mastery through a three-stage pipeline of continual pre-training, supervised fine-tuning, and execution-grounded reinforcement learning. Since verifying interactive playability is fundamentally harder than checking static code, we further introduce OpenGame-Bench, an evaluation pipeline that scores agentic game generation along Build Health, Visual Usability, and Intent Alignment via headless browser execution and VLM judging. Across 150 diverse game prompts, OpenGame establishes a new state-of-the-art. We hope OpenGame pushes code agents beyond discrete software engineering problems and toward building complex, interactive real-world applications. Our framework will be fully open-sourced.
- MultiWorld: Scalable Multi-Agent Multi-View Video World Models
Video world models have achieved remarkable success in simulating environmental dynamics in response to actions by users or agents. They are modeled as action-conditioned video generation models that take historical frames and current actions as input to predict future frames. Yet, most existing approaches are limited to single-agent scenarios and fail to capture the complex interactions inherent in real-world multi-agent systems. We present MultiWorld, a unified framework for multi-agent multi-view world modeling that enables accurate control of multiple agents while maintaining multi-view consistency. We introduce the Multi-Agent Condition Module to achieve precise multi-agent controllability, and the Global State Encoder to ensure coherent observations across different views. MultiWorld supports flexible scaling of agent and view counts, and synthesizes different views in parallel for high efficiency. Experiments on multi-player game environments and multi-robot manipulation tasks demonstrate that MultiWorld outperforms baselines in video fidelity, action-following ability, and multi-view consistency. Project page: https://multi-world.github.io/
- EasyVideoR1: Easier RL for Video Understanding
Reinforcement learning from verifiable rewards (RLVR) has demonstrated remarkable effectiveness in improving the reasoning capabilities of large language models. As models evolve into natively multimodal architectures, extending RLVR to video understanding becomes increasingly important yet remains largely unexplored, due to the diversity of video task types, the computational overhead of repeatedly decoding and preprocessing high-dimensional visual inputs, and the difficulty of reproducible evaluation across numerous sensitive hyperparameters. Existing open-source RL training frameworks provide solid infrastructure for text and image scenarios but lack systematic optimizations tailored for video modality. In this work, we present EasyVideoR1, a complete and efficient reinforcement learning framework specifically designed for training large vision-language models on video understanding tasks. EasyVideoR1 makes the following contributions: (1) a full video RL training pipeline with offline preprocessing and tensor caching that eliminates redundant video decoding and yields a 1.47 times throughput improvement; (2) a comprehensive, task-aware reward system covering 11 distinct video and image problem types with unified routing and modular extension; (3) a mixed offline-online data training paradigm that combines curated high-quality trajectories with on-policy exploration, benefiting the learning of more challenging tasks; (4) joint image-video training with independently configurable pixel budgets, allowing the two modalities to mutually reinforce each other; and (5) an asynchronous multi-benchmark evaluation framework covering 22 mainstream video understanding benchmarks, with reproduced accuracy closely aligned with officially reported scores.
Techmeme(42)
- John Ternus, senior VP of Hardware Engineering, will become Apple's next CEO on September 1; Tim Cook will become executive chairman of Apple's board (CNBC)
CNBC : John Ternus, senior VP of Hardware Engineering, will become Apple's next CEO on September 1; Tim Cook will become executive chairman of Apple's board — Apple said on Monday that John Ternus is succeeding Tim Cook as CEO, with Cook assuming the role of executive chairman on Sept. 1.
- OpenAI rolls out Chronicle, which builds memories from screen captures to make Codex more aware of context, as a research preview for Pro subscribers on macOS (Zac Hall/9to5Mac)
Zac Hall / 9to5Mac : OpenAI rolls out Chronicle, which builds memories from screen captures to make Codex more aware of context, as a research preview for Pro subscribers on macOS — Last week, OpenAI released an all-new version of Codex for Mac that includes the best example of AI-driven computer use yet.
- Apple says Johny Srouji, who most recently served as senior VP of Hardware Technologies, will assume an expanded role leading Hardware Engineering (Apple)
Apple : Apple says Johny Srouji, who most recently served as senior VP of Hardware Technologies, will assume an expanded role leading Hardware Engineering — Apple today announced that, effective immediately, Apple executive Johny Srouji will become chief hardware officer.
- Amazon agrees to invest up to $25B in Anthropic, on top of the $8B that it has already invested; Anthropic commits to spend $100B+ on AWS over the next 10 years (Ashley Capoot/CNBC)
Ashley Capoot / CNBC : Amazon agrees to invest up to $25B in Anthropic, on top of the $8B that it has already invested; Anthropic commits to spend $100B+ on AWS over the next 10 years — Amazon has agreed to invest up to $25 billion in Anthropic, on top of the $8 billion that it's poured into the artificial intelligence startup …
- John Ternus, senior VP of Hardware Engineering, will become Apple's next CEO on September 1; Tim Cook will become executive chairman of Apple's board (Apple)
Apple : John Ternus, senior VP of Hardware Engineering, will become Apple's next CEO on September 1; Tim Cook will become executive chairman of Apple's board — Apple® announced that Tim Cook will become executive chairman of Apple's board of directors and John Ternus, senior vice president …
- Microsoft pauses new GitHub Copilot signups for Pro, Pro+, and Student tiers, tightens usage limits, removes Opus models from Pro, and limits Opus 4.7 to Pro+ (The GitHub Blog)
The GitHub Blog : Microsoft pauses new GitHub Copilot signups for Pro, Pro+, and Student tiers, tightens usage limits, removes Opus models from Pro, and limits Opus 4.7 to Pro+ — As shared in our recent blog post, we're making the following changes to Copilot plans for individuals as part of our ongoing efforts …
- Trump Media & Technology Group names Kevin McGurn as interim CEO effective immediately; McGurn previously worked as an executive at Hulu, Vevo, and T-Mobile (Todd Spangler/Variety)
Todd Spangler / Variety : Trump Media & Technology Group names Kevin McGurn as interim CEO effective immediately; McGurn previously worked as an executive at Hulu, Vevo, and T-Mobile — Trump Media & Technology Group, the parent company of social-media platform Truth Social and other businesses whose mission is …
- Source: a handful of unauthorized users in a private Discord channel have been accessing Anthropic's Mythos model since the day the company announced it (Rachel Metz/Bloomberg)
Rachel Metz / Bloomberg : Source: a handful of unauthorized users in a private Discord channel have been accessing Anthropic's Mythos model since the day the company announced it — A small group of unauthorized users have accessed Anthropic PBC's new Mythos AI model, a technology that the company says is so powerful …
- Google now offers two research agents: Deep Research, replacing its December preview release, and Deep Research Max, both available via Gemini API paid tiers (The Keyword)
The Keyword : Google now offers two research agents: Deep Research, replacing its December preview release, and Deep Research Max, both available via Gemini API paid tiers — Built with Gemini 3.1 Pro, the new Deep Research agents bring MCP support, native visualizations and unprecedented analytical quality …
- Reliable Robotics, which is developing autonomous aircraft systems for cargo flights, raised $160M led by Nimble Partners, pushing its valuation to ~$1B (Cailley LaPara/Bloomberg)
Cailley LaPara / Bloomberg : Reliable Robotics, which is developing autonomous aircraft systems for cargo flights, raised $160M led by Nimble Partners, pushing its valuation to ~$1B — Reliable Robotics Corp. secured $160 million in new funding — pushing its valuation to nearly $1 billion — as the Silicon Valley startup makes …
- Adobe announces a $25B stock repurchase program through April 30, 2030; Adobe shares have fallen around 30% so far this year (Zaheer Kachwala/Reuters)
Zaheer Kachwala / Reuters : Adobe announces a $25B stock repurchase program through April 30, 2030; Adobe shares have fallen around 30% so far this year — Adobe (ADBE.O) on Tuesday said its board of directors has approved a new $25 billion stock repurchase program through April 30, 2030, sending its shares up around 2% in extended trading.
- The US DOJ says a former ransomware negotiator pleaded guilty to helping cybercriminals extort companies in cyberattacks in five different incidents (Lorenzo Franceschi-Bicchierai/TechCrunch)
Lorenzo Franceschi-Bicchierai / TechCrunch : The US DOJ says a former ransomware negotiator pleaded guilty to helping cybercriminals extort companies in cyberattacks in five different incidents — Angelo Martino, a former ransomware negotiator, has pleaded guilty to helping cybercriminals extort companies in cyberattacks.
Solidot(36)
- Smartphones and tablets sold in the EU must have replaceable batteries from 2027
Under new EU rules, smartphones and tablets sold in Europe from 2027 must ship with replaceable batteries, a measure aimed at cutting electronic waste. Around 150 million smartphones and 24 million tablets are sold in the EU each year, generating roughly 5 million tonnes of e-waste annually, of which less than 40% is properly recycled. The replaceable-battery mandate takes effect on February 18, 2027, and also requires that replacement batteries for any portable electronic product remain available for at least five years after the last unit is placed on the market. Batteries must be replaceable by consumers themselves; if special tools are required, they must be provided free of charge at the point of sale. The new rules also require operating system updates to continue for at least five years.
- Nobel laureate pessimistic about humanity surviving another 50 years
American theoretical physicist David Gross shared the 2004 Nobel Prize in Physics with his student Frank Anthony Wilczek for the discovery of asymptotic freedom in quantum chromodynamics; on April 18, 2026 he received a special Breakthrough Prize in Fundamental Physics, worth $3 million, for a lifetime of pioneering contributions to theoretical physics. Asked in an interview whether theoretical physics might achieve grand unification within 50 years, he said the probability that humanity survives another 50 years is very small. He put the annual probability of nuclear war at roughly 2%, noting that the great powers have signed no treaties in the past decade and that humanity is locked in an alarming arms race; recent events have all raised the risk of nuclear war, making 2% a conservative estimate. There are now nine nuclear-armed states, three of them nuclear superpowers, a far more complicated situation than with two, and agreements and norms between states are unraveling. He believes the probability of humanity surviving another hundred years is minuscule, and another two hundred years vanishingly small. His answer to Fermi's paradox, "where are all the intelligent civilizations in the galaxy, and why don't they contact us?", is that they have already destroyed themselves.
- Fake stars on GitHub projects
On GitHub, the largest source code hosting platform, a project's star count was once a key measure of its popularity, and because it matters, faking or paying for stars has become increasingly commercialized. Researchers from Carnegie Mellon, NC State, and Socket published a study at ICSE 2026 that used a tool called StarScout to analyze 20TB of GitHub metadata covering 6.7 billion events and 326 million stars from 2019 to 2024. They identified 6 million suspected fake stars across 18,617 repositories, created by 301,000 accounts. Paid star inflation worsened sharply in 2024: by July, 16.66% of projects with 50 or more stars were suspected of buying them. By January 2025, 90.42% of the suspected repositories had been removed and 57.07% of the suspected accounts had been closed. AI and LLM projects have overtaken blockchain/cryptocurrency as the non-malware category with the most fake stars. The investigation found dozens of websites, along with Fiverr sellers and Telegram channels, offering paid stars at prices from $0.03 to $0.80-0.90 per star. A Tsinghua University study found that QQ and WeChat promotion groups also sell stars.
- WireGuard for Windows v1.0 released
WireGuard author Jason Donenfeld announced on the mailing list the v1.0 release of WireGuard for Windows, along with WireGuardNT, the kernel-mode implementation for Windows. WireGuard is an open-source VPN protocol and free software project that aims to outperform IPsec and OpenVPN. The project shipped its first release in 2015, and its Linux version reached stable production quality and was merged into the mainline kernel in 2020. The Windows version took another five years to go from beta to maturity.
- Brave launches paid Brave Origin; the Linux version is free
Brave has launched a paid browser, Brave Origin, which removes the monetization features built into the standard version, such as Rewards. Origin can be downloaded separately or applied as an upgrade to an existing install; a one-time purchase unlocks it and can be activated on multiple devices. The Linux version of Origin is free, which may leave paying Windows users wondering why they should pay for something others get for free.
- Sruthi Chandran elected DPL
The 2026 Debian Project Leader (DPL) election has concluded: the sole candidate, Indian librarian Sruthi Chandran, was elected and takes office on April 21. A librarian turned free software enthusiast and Debian developer, she has worked on Debian's Ruby, JavaScript, Go, and font packages since 2016, though her development activity has tapered off recently; she is also a Community Team delegate, an Outreach team member, and a DebConf Committee member. She hopes to help improve diversity in the Debian community and promote discussion of diversity issues.
- The creative software industry declares war on Adobe
Every empire falls eventually, and the creative software industry now agrees that the end of the Adobe era is approaching, as competitors offer rival products for free or at lower prices; Adobe's creative software has been treated as the industry standard for decades. After acquiring Autograph, Cinema 4D developer Maxon released a free version for individual users; Autograph is motion graphics software comparable to Adobe After Effects that previously cost $1,795 for a perpetual license. After acquiring Affinity, Canva made Affinity Designer 2, Affinity Photo 2, and Affinity Publisher 2, functional counterparts to Adobe Illustrator, Photoshop, and InDesign, free to users, and did the same with Cavalry, an After Effects alternative, after acquiring it. In January Apple launched the Creator Studio suite, bundling Final Cut Pro, Logic Pro, Pixelmator Pro, Motion, Compressor, and MainStage for $12.99 per month, versus $69.99 per month for Adobe's Creative Cloud Pro; Apple does not force a subscription, and users can still buy one-time licenses for individual apps.
- Cocaine-exposed salmon take more risks
Drugs used by humans inevitably seep into the environment and are ingested by animals such as sharks, but what happens to animals that ingest them? Laboratory studies show that water fleas exposed to cocaine swim faster and crayfish venture out of their shelters, a dangerous behavior in the wild. In a study published in Current Biology, scientists tested the effects of drugs on salmon in the wild for the first time. One group of 35 salmon was implanted with small devices containing cocaine, a second group received devices containing benzoylecgonine, cocaine's main metabolite, and a third served as the control. The control fish mostly stayed within 20 km of their release point, while the cocaine group ranged farther and the benzoylecgonine group traveled up to 32 km away. The metabolite affected the animals more strongly than cocaine itself, consistent with laboratory observations. The long-term consequences of drugs on animal behavior remain unclear and require further study.
- Palantir publishes a controversial Technological Republic manifesto
A 22-point manifesto drawn from The Technological Republic, co-authored by Palantir CEO Alex Karp and head of legal Nicholas W. Zamiska, has sparked widespread controversy over the past few days. Palantir is a military contractor co-founded by Silicon Valley billionaire Peter Thiel. Critics call the manifesto fascist, filled with conservative clichés about liberalism. It argues that for free democratic societies to win, moral appeals are not enough: they need hard power, and this century's hard power will be built on software. The manifesto calls for greater tolerance of religious belief and resistance to the temptations of hollow pluralism; it deems making products that people merely like and find useful a form of decadence, arguing that Silicon Valley companies should instead provide security; it says the question is not whether AI weapons will be built, but who builds them and to what end, and that performative debates about AI weapons are a waste of time; it holds that military service should be a universal obligation, that the public should regard elites with both awe and tolerance, that people should not mock Elon Musk, and that the ruthless exposure of public figures' private lives should be opposed. The manifesto also criticizes the West for refusing to define a national culture in the name of inclusivity.
- The F-35 is a cutting-edge fighter built for a different war
The US F-35 fighter program began around the turn of the century, with a projected total lifecycle cost of over $2 trillion, making it the most expensive defense procurement program in American history. Yet the wars of the past few years show that modern warfare increasingly favors systems that can be mass-produced and quickly replaced when lost. The F-35 is few in number, expensive, and slow to replace, ill-suited to such wars of attrition. Cheap missiles or drones launched en masse also pose cost-exchange problems for today's expensive missile defenses: Patriot and THAAD are the most advanced missile defense systems, but they are costly and produced in limited quantities. The F-35 faces a similar problem. The Russia-Ukraine war has proven that unmanned systems can reshape the battlefield faster than skeptics expected.
- The UK plans to ban phone use during the school day
The UK plans to ban students from using phones during school hours. The government intends to amend the Children's Wellbeing and Schools Bill to turn its guidance on school phone bans into a legally binding prohibition. Most UK schools already have policies against phones: data show that 99.8% of primary schools and 90% of secondary schools restrict or ban phone use during the school day. The bill, regarded as the UK's most significant child protection legislation in decades, also includes compulsory registration of children not in school, a crackdown on profiteering in children's social care, and the creation of a unique identifier to help agencies track children's welfare.
- How Honor's humanoid robot became a half marathon champion
Humanoid robots from smartphone maker Honor swept the top six places at the Beijing Yizhuang half marathon, while last year's champion Tiangong and Unitree's closely watched H1 both fell near the finish line. The result highlights a reliability gap between Honor and the startups. Honor engineers attributed the win to three factors: first, applying the technical expertise accumulated in smartphones to humanoid robots; second, body proportions, with leg length set at 95 cm based on the build of elite human athletes, which brought a major improvement; and third, a high-performance cooling system developed in-house. The core cooling components come from Huake Lengxin (Shanghai) Power Technology. Its CEO Chen Qi explained that one of the central challenges for a humanoid robot running fast over long distances is dissipating heat from the lower-limb joint motors: high-load running demands high torque output and generates enormous heat, effectively a small furnace. Once a motor exceeds its safe temperature threshold, permanent failures such as controller burnout, permanent-magnet demagnetization, or winding insulation damage can follow. Honor's robot is fitted with two of Huake Lengxin's high-speed suspension pumps, a liquid cooling approach that helps resolve the robot's heat dissipation problem.