OrangeBot.AI Digest — 2026-05-08

87 headlines across 8 sources, aggregated for the day.

Hacker News (15)

  1. Google broke reCAPTCHA for de-googled Android users (reclaimthenet.org)
  2. Cartoon Network Flash Games (www.webdesignmuseum.org)
  3. David Attenborough's 100th Birthday (www.bbc.com)
  4. A web page that shows you everything the browser told it without asking (sinceyouarrived.world)
  5. Tesla is recalling its cheaper Cybertruck because the wheels might fall off (www.theverge.com)
  6. Google Cloud Fraud Defence is just WEI repackaged (privatecaptcha.com)
  7. Just Use Go (blainsmith.com)
  8. US Government releases first batch of UAP documents and videos (www.war.gov)
  9. Poland is now among the 20 largest economies (apnews.com)
  10. An Introduction to Meshtastic (meshtastic.org)
  11. Ask HN: We just had an actual UUID v4 collision...
  12. Nintendo announces price increases for Nintendo Switch 2 (www.nintendo.co.jp)
  13. ClojureScript Gets Async/Await (clojurescript.org)
  14. Rumors of my death are slightly exaggerated
  15. GPT-5.5 Price Increase: What It Costs (openrouter.ai)

GitHub Trending (12)

  1. anthropics / financial-services
  2. addyosmani / agent-skills
  3. Hmbown / DeepSeek-TUI
  4. z-lab / dflash
  5. decolua / 9router
  6. CloakHQ / CloakBrowser
  7. awslabs / aidlc-workflows
  8. HKUDS / AI-Trader
  9. LearningCircuit / local-deep-research
  10. lobehub / lobehub
  11. datawhalechina / hello-agents
  12. flutter / skills

Product Hunt (15)

  1. Sendly

    SMS For AI Agents & Developers

  2. Ara

    Agentic Wispr Flow computer-use agent living in your notch

  3. SuperIsland

    Dynamic Island for macOS with Extensions

  4. ElevenCreative Flows

    Node-based creative pipelines, now with real-time collab

  5. GlowIsland

    Turn your Mac notch into an interactive utility ribbon.

  6. KodHau

    Stop your AI from breaking prod: give it your team's decisions

  7. Photobomb

    Cards Against Humanity, but for your camera roll

  8. GitHired

    Find 100x engineers by proof of work, not resume keywords

  9. Kuku: open source

    Your open-source, local second brain for every AI

  10. Fluent Frame

    Ship polished product videos as fast as you ship features

  11. BNA Code

    CLI agent that builds full-stack mobile apps from terminal

  12. iOrchestra AI Hardware Engineers

    Prompt to production-ready hardware designs for manufacture

  13. Illospace

    Living space where teams and agents work together

  14. AgentChat

    Messaging Platform for Agents

  15. APIEval-20

    An open benchmark for AI agents that test APIs

Hugging Face (15)

  1. Skill1: Unified Evolution of Skill-Augmented Agents via Reinforcement Learning

    A persistent skill library allows language model agents to reuse successful strategies across tasks. Maintaining such a library requires three coupled capabilities. The agent selects a relevant skill, utilizes it during execution, and distills new skills from experience. Existing methods optimize these capabilities in isolation or with separate reward sources, resulting in partial and conflicting evolution. We propose Skill1, a framework that trains a single policy to co-evolve skill selection, utilization, and distillation toward a shared task-outcome objective. The policy generates a query to search the skill library, re-ranks candidates to select one, solves the task conditioned on it, and distills a new skill from the trajectory. All learning derives from a single task-outcome signal. Its low-frequency trend credits selection and its high-frequency variation credits distillation. Experiments on ALFWorld and WebShop show that Skill1 outperforms prior skill-based and reinforcement learning baselines. Training dynamics confirm the co-evolution of the three capabilities, and ablations show that removing any credit signal degrades the evolution.
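
    A minimal sketch of the select-utilize-distill loop described above, using keyword overlap as a stand-in for skill retrieval (the `SkillLibrary` class, its scoring rule, and the example tasks are illustrative assumptions, not the paper's implementation):

```python
class SkillLibrary:
    """Toy persistent skill library: select a skill by keyword overlap with
    the query, and distill a new skill from a solved task."""

    def __init__(self):
        self.skills = []  # list of (keyword set, advice) pairs

    def select(self, query):
        """Return the advice whose keywords best overlap the query, if any."""
        words = set(query.lower().split())
        scored = [(len(words & keywords), advice)
                  for keywords, advice in self.skills]
        score, advice = max(scored, default=(0, None))
        return advice if score > 0 else None

    def distill(self, task, advice):
        """Store a new skill extracted from a successful trajectory."""
        self.skills.append((set(task.lower().split()), advice))


lib = SkillLibrary()
lib.distill("clean the mug", "rinse objects at the sink before use")
hint = lib.select("clean the dirty mug")   # reuses the distilled skill
```

    In Skill1 itself, selection, utilization, and distillation are all produced by one trained policy and credited from a single task-outcome reward; the sketch only shows the data flow.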

  2. Beyond Semantic Similarity: Rethinking Retrieval for Agentic Search via Direct Corpus Interaction

    Modern retrieval systems, whether lexical or semantic, expose a corpus through a fixed similarity interface that compresses access into a single top-k retrieval step before reasoning. This abstraction is efficient, but for agentic search, it becomes a bottleneck: exact lexical constraints, sparse clue conjunctions, local context checks, and multi-step hypothesis refinement are difficult to implement by calling a conventional off-the-shelf retriever, and evidence filtered out early cannot be recovered by stronger downstream reasoning. Agentic tasks further exacerbate this limitation because they require agents to orchestrate multiple steps, including discovering intermediate entities, combining weak clues, and revising the plan after observing partial evidence. To tackle the limitation, we study direct corpus interaction (DCI), where an agent searches the raw corpus directly with general-purpose terminal tools (e.g., grep, file reads, shell commands, lightweight scripts), without any embedding model, vector index, or retrieval API. This approach requires no offline indexing and adapts naturally to evolving local corpora. Across IR benchmarks and end-to-end agentic search tasks, this simple setup substantially outperforms strong sparse, dense, and reranking baselines on several BRIGHT and BEIR datasets, and attains strong accuracy on BrowseComp-Plus and multi-hop QA without relying on any conventional semantic retriever. Our results indicate that as language agents become stronger, retrieval quality depends not only on reasoning ability but also on the resolution of the interface through which the model interacts with the corpus, with which DCI opens a broader interface-design space for agentic search.

  3. Continuous Latent Diffusion Language Model

    Large language models have achieved remarkable success under the autoregressive paradigm, yet high-quality text generation need not be tied to a fixed left-to-right order. Existing alternatives still struggle to jointly achieve generation efficiency, scalable representation learning, and effective global semantic modeling. We propose Cola DLM, a hierarchical latent diffusion language model that frames text generation through hierarchical information decomposition. Cola DLM first learns a stable text-to-latent mapping with a Text VAE, then models a global semantic prior in continuous latent space with a block-causal DiT, and finally generates text through conditional decoding. From a unified Markov-path perspective, its diffusion process performs latent prior transport rather than token-level observation recovery, thereby separating global semantic organization from local textual realization. This design yields a more flexible non-autoregressive inductive bias, supports semantic compression and prior fitting in continuous space, and naturally extends to other continuous modalities. Through experiments spanning 4 research questions, 8 benchmarks, strictly matched ~2B-parameter autoregressive and LLaDA baselines, and scaling curves up to about 2000 EFLOPs, we identify an effective overall configuration of Cola DLM and verify its strong scaling behavior for text generation. Taken together, the results establish hierarchical continuous latent prior modeling as a principled alternative to strictly token-level language modeling, where generation quality and scaling behavior may better reflect model capability than likelihood, while also suggesting a concrete path toward unified modeling across discrete text and continuous modalities.

  4. MiA-Signature: Approximating Global Activation for Long-Context Understanding

    A growing body of work in cognitive science suggests that reportable conscious access is associated with global ignition over distributed memory systems, while such activation is only partially accessible as individuals cannot directly access or enumerate all activated contents. This tension suggests a plausible mechanism that cognition may rely on a compact representation that approximates the global influence of activation on downstream processing. Inspired by this idea, we introduce the concept of Mindscape Activation Signature (MiA-Signature), a compressed representation of the global activation pattern induced by a query. In LLM systems, this is instantiated via submodular-based selection of high-level concepts that cover the activated context space, optionally refined through lightweight iterative updates using working memory. The resulting MiA-Signature serves as a conditioning signal that approximates the effect of the full activation state while remaining computationally tractable. Integrating MiA-Signatures into both RAG and agentic systems yields consistent performance gains across multiple long-context understanding tasks.
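
    The submodular-based selection step can be illustrated with the classic greedy max-coverage heuristic: repeatedly pick the concept covering the most not-yet-covered context. The concept names and keyword sets below are invented for the example; the paper's actual objective and refinement loop are richer.

```python
def greedy_cover(concepts, k):
    """Greedy max-coverage: pick up to k concepts whose keyword sets jointly
    cover the most of the activated context (a submodular objective)."""
    chosen, covered = [], set()
    pool = dict(concepts)
    for _ in range(k):
        if not pool:
            break
        name, keywords = max(pool.items(),
                             key=lambda item: len(item[1] - covered))
        if not keywords - covered:
            break  # no marginal gain left
        chosen.append(name)
        covered |= keywords
        del pool[name]
    return chosen, covered

concepts = {
    "finance": {"loan", "rate", "bank"},
    "rates":   {"rate"},               # redundant once "finance" is chosen
    "travel":  {"flight", "hotel"},
}
picked, covered = greedy_cover(concepts, k=2)
```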

  5. RaguTeam at SemEval-2026 Task 8: Meno and Friends in a Judge-Orchestrated LLM Ensemble for Faithful Multi-Turn Response Generation

    We present our winning system for Task B (generation with reference passages) in SemEval-2026 Task 8: MTRAGEval. Our method is a heterogeneous ensemble of seven LLMs with two prompting variants, where a GPT-4o-mini judge selects the best candidate per instance. We ranked 1st out of 26 teams, achieving a conditioned harmonic mean of 0.7827 and outperforming the strongest baseline (gpt-oss-120b, 0.6390). Ablations show that diversity in model families, scales, and prompting strategies is essential, with the ensemble consistently beating any single model. We also introduce Meno-Lite-0.1, a 7B domain-adapted model with a strong cost-performance trade-off, and analyse MTRAGEval, highlighting annotation limitations and directions for improvement. Our code is publicly available: https://github.com/RaguTeam/ragu_mtrag_semeval

  6. When to Trust Imagination: Adaptive Action Execution for World Action Models

    World Action Models (WAMs) have recently emerged as a promising paradigm for robotic manipulation by jointly predicting future visual observations and future actions. However, current WAMs typically execute a fixed number of predicted actions after each model inference, leaving the robot blind to whether the imagined future remains consistent with the actual physical rollout. In this work, we formulate adaptive WAM execution as a future-reality verification problem: the robot should execute longer when the WAM-predicted future remains reliable, and replan earlier when reality deviates from imagination. To this end, we propose Future Forward Dynamics Causal Attention (FFDC), a lightweight verifier that jointly reasons over predicted future actions, predicted visual dynamics, real observations, and language instructions to estimate whether the remaining action rollout can still be trusted. FFDC enables adaptive action chunk sizes as an emergent consequence of prediction-observation consistency, preserving the efficiency of long-horizon execution while restoring responsiveness in contact-rich or difficult phases. We further introduce Mixture-of-Horizon Training to improve long-horizon trajectory coverage for adaptive execution. Experiments on the RoboTwin benchmark and in the real world demonstrate that our method achieves a strong robustness-efficiency trade-off: on RoboTwin, it reduces WAM forward passes by 69.10% and execution time by 34.02%, while improving success rate by 2.54% over the short-chunk baseline; in real-world experiments, it improves success rate by 35%.
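
    The execute-while-trusted policy described above can be sketched in a few lines; the verifier scores and the 0.5 threshold below are hypothetical placeholders for FFDC's learned consistency estimate:

```python
def adaptive_execute(actions, trust_scores, threshold=0.5):
    """Run predicted actions only while the verifier still trusts the
    imagined rollout; stop and replan once trust drops below threshold."""
    executed = []
    for action, trust in zip(actions, trust_scores):
        if trust < threshold:
            return executed, True      # reality diverged from imagination
        executed.append(action)
    return executed, False             # full action chunk executed

done, replanned = adaptive_execute(
    ["reach", "grasp", "lift"], [0.9, 0.4, 0.8])
```

    Long chunks survive when predictions stay consistent with observations, which is how the paper recovers both efficiency and responsiveness.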

  7. MARBLE: Multi-Aspect Reward Balance for Diffusion RL

    Reinforcement learning fine-tuning has become the dominant approach for aligning diffusion models with human preferences. However, assessing images is intrinsically a multi-dimensional task, and multiple evaluation criteria need to be optimized simultaneously. Existing practices deal with multiple rewards by training one specialist model per reward, optimizing a weighted-sum reward R(x)=sum_k w_k R_k(x), or sequentially fine-tuning with a hand-crafted stage schedule. These approaches either fail to produce a unified model that can be jointly trained on all rewards or necessitate heavy, manually tuned sequential training. We find that the failure stems from using a naive weighted-sum reward aggregation. This approach suffers from a sample-level mismatch because most rollouts are specialist samples, highly informative for certain reward dimensions but irrelevant for others; consequently, weighted summation dilutes their supervision. To address this issue, we propose MARBLE (Multi-Aspect Reward BaLancE), a gradient-space optimization framework that maintains independent advantage estimators for each reward, computes per-reward policy gradients, and harmonizes them into a single update direction without manually tuned reward weighting, by solving a Quadratic Programming problem. We further propose an amortized formulation that exploits the affine structure of the loss used in DiffusionNFT to reduce the per-step cost from K+1 backward passes to near the single-reward baseline cost, together with EMA smoothing on the balancing coefficients to stabilize updates against transient single-batch fluctuations. On SD3.5 Medium with five rewards, MARBLE improves all five reward dimensions simultaneously, turns the worst-aligned reward's gradient cosine from negative under weighted summation in 80% of mini-batches to consistently positive, and runs at 0.97x the training speed of baseline training.
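
    MARBLE resolves the conflict in gradient space by solving a quadratic program. As a rough, simplified illustration of the idea (this is a PCGrad-style projection, not the paper's QP, and the two toy reward gradients are invented):

```python
import numpy as np

def balance_gradients(grads):
    """Project each per-reward gradient off any conflicting (negative
    dot-product) peer, then sum, so no reward's update is drowned out."""
    balanced = []
    for i, g in enumerate(grads):
        g = g.astype(float).copy()
        for j, h in enumerate(grads):
            if i != j and g @ h < 0:
                g -= (g @ h) / (h @ h) * h   # drop the conflicting component
        balanced.append(g)
    return np.sum(balanced, axis=0)

g_aesthetic = np.array([1.0, 0.0])
g_text      = np.array([-0.5, 1.0])          # conflicts with g_aesthetic
update = balance_gradients([g_aesthetic, g_text])
```

    Unlike a plain weighted sum, the combined update keeps a positive component along every individual reward gradient.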

  8. Continuous-Time Distribution Matching for Few-Step Diffusion Distillation

    Step distillation has become a leading technique for accelerating diffusion models, among which Distribution Matching Distillation (DMD) and Consistency Distillation are two representative paradigms. While consistency methods enforce self-consistency along the full PF-ODE trajectory to steer it toward the clean data manifold, vanilla DMD relies on sparse supervision at a few predefined discrete timesteps. This restricted discrete-time formulation and mode-seeking nature of the reverse KL divergence tends to exhibit visual artifacts and over-smoothed outputs, often necessitating complex auxiliary modules -- such as GANs or reward models -- to restore visual fidelity. In this work, we introduce Continuous-Time Distribution Matching (CDM), migrating the DMD framework from discrete anchoring to continuous optimization for the first time. CDM achieves this through two continuous-time designs. First, we replace the fixed discrete schedule with a dynamic continuous schedule of random length, so that distribution matching is enforced at arbitrary points along sampling trajectories rather than only at a few fixed anchors. Second, we propose a continuous-time alignment objective that performs active off-trajectory matching on latents extrapolated via the student's velocity field, improving generalization and preserving fine visual details. Extensive experiments on different architectures, including SD3-Medium and Longcat-Image, demonstrate that CDM provides highly competitive visual fidelity for few-step image generation without relying on complex auxiliary objectives. Code is available at https://github.com/byliutao/cdm.

  9. SkillOS: Learning Skill Curation for Self-Evolving Agents

    LLM-based agents are increasingly deployed to handle streaming tasks, yet they often remain one-off problem solvers that fail to learn from past interactions. Reusable skills distilled from experience provide a natural substrate for self-evolution, where high-quality skill curation serves as the key bottleneck. Existing approaches either rely on manual skill curation, prescribe heuristic skill operations, or train for short-horizon skill operations. However, they still struggle to learn complex long-term curation policies from indirect and delayed feedback. To tackle this challenge, we propose SkillOS, an experience-driven RL training recipe for learning skill curation in self-evolving agents. SkillOS pairs a frozen agent executor that retrieves and applies skills with a trainable skill curator that updates an external SkillRepo from accumulated experience. To provide learning signals for curation, we design composite rewards and train on grouped task streams based on skill-relevant task dependencies, where earlier trajectories update the SkillRepo, and later related tasks evaluate these updates. Across multi-turn agentic tasks and single-turn reasoning tasks, SkillOS consistently outperforms memory-free and strong memory-based baselines in both effectiveness and efficiency, with the learned skill curator generalizing across different executor backbones and task domains. Further analyses show that the learned curator produces more targeted skill use, while the skills in SkillRepo evolve into more richly structured Markdown files that encode higher-level meta-skills over time.

  10. Nonsense Helps: Prompt Space Perturbation Broadens Reasoning Exploration

    Reinforcement learning with verifiable rewards, particularly Group Relative Policy Optimization (GRPO), has significantly advanced the reasoning capabilities of Large Language Models (LLMs). However, in complex tasks, GRPO frequently suffers from the "zero-advantage problem": when all sampled rollouts for a query fail, the relative advantage collapses to zero. Consequently, the model loses effective training signals for these questions, wasting the training data and computational budget. While simply increasing the sampling budget for these questions is a common remedy, the static sampling policy inherently constrains reasoning exploration, limiting the success rate. In this paper, we propose Lorem Perturbation for Exploration (LoPE), a simple yet effective training framework to break this exploration bottleneck. We posit that task-irrelevant prompt-space perturbations can shift the model's output distribution enough to unlock orthogonal reasoning pathways for hard questions. Specifically, LoPE prepends sequences stochastically assembled from Lorem Ipsum vocabulary (a pseudo-Latin placeholder text) to the prompts before resampling. Experiments across 1.7B, 4B, and 7B models demonstrate that LoPE significantly outperforms resampling with the original prompts. Further analysis reveals that other Latin-based random sequences with low perplexity are also effective perturbations. Our results establish LoPE as a strong baseline for broadening exploration in LLM reinforcement learning.
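
    The perturbation itself is trivial to reproduce; the vocabulary, prefix length, and separator below are illustrative choices, not the paper's exact configuration:

```python
import random

LOREM = ("lorem ipsum dolor sit amet consectetur adipiscing elit "
         "sed do eiusmod tempor incididunt").split()

def lope_perturb(prompt, n_words=8, seed=None):
    """Prepend a random pseudo-Latin prefix so resampling explores
    output distributions the original prompt never reaches."""
    rng = random.Random(seed)
    prefix = " ".join(rng.choice(LOREM) for _ in range(n_words))
    return prefix + "\n\n" + prompt

perturbed = lope_perturb("Prove that the square root of 2 is irrational.",
                         seed=0)
```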

  11. Audio-Visual Intelligence in Large Foundation Models

    Audio-Visual Intelligence (AVI) has emerged as a central frontier in artificial intelligence, bridging auditory and visual modalities to enable machines that can perceive, generate, and interact in the multimodal real world. In the era of large foundation models, joint modeling of audio and vision has become increasingly crucial, i.e., not only for understanding but also for controllable generation and reasoning across dynamic, temporally grounded signals. Recent advances, such as Meta MovieGen and Google Veo-3, highlight the growing industrial and academic focus on unified audio-vision architectures that learn from massive multimodal data. However, despite rapid progress, the literature remains fragmented, spanning diverse tasks, inconsistent taxonomies, and heterogeneous evaluation practices that impede systematic comparison and knowledge integration. This survey provides the first comprehensive review of AVI through the lens of large foundation models. We establish a unified taxonomy covering the broad landscape of AVI tasks, ranging from understanding (e.g., speech recognition, sound localization) to generation (e.g., audio-driven video synthesis, video-to-audio) and interaction (e.g., dialogue, embodied, or agentic interfaces). We synthesize methodological foundations, including modality tokenization, cross-modal fusion, autoregressive and diffusion-based generation, large-scale pretraining, instruction alignment, and preference optimization. Furthermore, we curate representative datasets, benchmarks, and evaluation metrics, offering a structured comparison across task families and identifying open challenges in synchronization, spatial reasoning, controllability, and safety. By consolidating this rapidly expanding field into a coherent framework, this survey aims to serve as a foundational reference for future research on large-scale AVI.

  12. StraTA: Incentivizing Agentic Reinforcement Learning with Strategic Trajectory Abstraction

    Large language models (LLMs) are increasingly used as interactive agents, but optimizing them for long-horizon decision making remains difficult because current methods are largely purely reactive, which weakens both exploration and credit assignment over extended trajectories. In this work, we present Strategic Trajectory Abstraction (StraTA), a simple framework that introduces an explicit trajectory-level strategy into agentic reinforcement learning (RL). StraTA samples a compact strategy from the initial task state, conditions subsequent actions on that strategy, and trains strategy generation and action execution jointly with a hierarchical GRPO-style rollout design, further enhanced by diverse strategy rollout and critical self-judgment. Experiments on ALFWorld, WebShop, and SciWorld show that StraTA consistently improves both sample efficiency and final performance over strong baselines. StraTA reaches success rates of 93.1% on ALFWorld and 84.2% on WebShop. On SciWorld, StraTA attains a 63.5% overall score, outperforming frontier closed-source models.

  13. Auto Research with Specialist Agents Develops Effective and Non-Trivial Training Recipes

    We study auto research as a closed empirical loop driven by external measurement. Each submitted trial carries a hypothesis, an executable code edit, an evaluator-owned outcome, and feedback that shapes the next proposal. The output is not a generated paper or a single model checkpoint, but an auditable trajectory of proposals, code diffs, experiments, scores, and failure labels. We instantiate this loop with specialist agents that partition recipe surfaces and share measured lineage across trials. The central empirical finding is that lineage feedback lets agents turn evaluator outcomes, including crashes, budget overruns, size failures, and accuracy-gate misses, into later program-level recipe edits rather than one-shot suggestions. Across 1,197 headline-run trials plus 600 Parameter Golf control trials after one-time setup and launch, humans did not choose proposals, edit recipes, override scores, or repair failed trials during the search. In the three headline runs, the same submitted-trial loop reduces Parameter Golf validation bpb by 0.81%, raises NanoChat-D12 CORE by 38.7%, and reduces CIFAR-10 Airbench96 wallclock by 4.59%, with each task measured by its own external evaluator and legality checks. The trace includes a strict architecture-domain audit of 157 headline-run submissions and program rewrites such as a NanoChat attention-kernel path change. Within this scope the loop autonomously writes code, submits experiments, absorbs feedback, applies and combines known techniques inside each environment, and improves public starting recipes.

  14. A^2TGPO: Agentic Turn-Group Policy Optimization with Adaptive Turn-level Clipping

    Reinforcement learning for agentic large language models (LLMs) typically relies on a sparse, trajectory-level outcome reward, making it difficult to evaluate the contribution of individual tool-calls within multi-turn interactions. Existing approaches to such process credit assignment either depend on separate external process reward models that introduce additional computational overhead, or tree-based structural rollouts that merely redistribute the outcome signal while constraining trajectory diversity. A promising alternative leverages the per-turn change in the policy's predicted probability of the ground-truth, termed Information Gain (IG), as an intrinsic process signal without an external evaluator. However, prior work on leveraging IG signals within the RL training loop faces three systematic challenges: normalizing across turns that face heterogeneous positional contexts can distort the relative standing of individual turns, accumulating a variable number of terms causes advantage magnitudes to drift with trajectory depth, and a fixed clipping range governs policy updates identically for turns with vastly different IG signals. In this paper, we propose A^2TGPO (Agentic Turn-Group Policy Optimization with Adaptive Turn-level Clipping), which retains IG as the intrinsic signal but re-designs how it is normalized, accumulated, and consumed: (i) turn-group normalization: normalizes IG within each (prompt, turn-index) group so that each turn is compared only against peers at the same interaction depth; (ii) variance-rescaled discounted accumulation: divides cumulative normalized IG by the square root of the number of accumulated terms to keep advantage magnitudes comparable across turn positions; and (iii) adaptive turn-level clipping: modulates each turn's clipping range based on its normalized IG, widening the update region for informative turns and narrowing it for uninformative ones.
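
    Component (i), turn-group normalization, can be sketched directly (the information-gain values below are made up; the discounted accumulation and adaptive clipping of (ii)-(iii) are omitted):

```python
import numpy as np

def turn_group_normalize(ig, prompt_ids, turn_ids):
    """Standardize per-turn information gain within each (prompt, turn-index)
    group, so each turn is compared only to peers at the same depth."""
    ig = np.asarray(ig, dtype=float)
    out = np.empty_like(ig)
    keys = list(zip(prompt_ids, turn_ids))
    for key in set(keys):
        mask = np.array([k == key for k in keys])
        group = ig[mask]
        out[mask] = (group - group.mean()) / (group.std() + 1e-8)
    return out

# Two rollouts of the same prompt, two turns each.
ig  = [0.2, 0.1, 0.8, 0.9]
adv = turn_group_normalize(ig, ["p1"] * 4, [1, 2, 1, 2])
```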

  15. Can RL Teach Long-Horizon Reasoning to LLMs? Expressiveness Is Key

    Reinforcement learning (RL) has been applied to improve large language model (LLM) reasoning, yet the systematic study of how training scales with task difficulty has been hampered by the lack of controlled, scalable environments. We introduce ScaleLogic, a synthetic logical reasoning framework that offers independent control over two axes of difficulty: the depth of the required proof planning (i.e., the horizon) and the expressiveness of the underlying logic. Our proposed framework supports a wide range of logics: from simple implication-only logic ("if-then") towards more expressive first-order reasoning with conjunction ("and"), disjunction ("or"), negation ("not"), and universal quantification ("for all"). Using this framework, we show that the RL training compute T follows a power law with respect to reasoning depth D (T ∝ D^γ, R² > 0.99), and that the scaling exponent γ increases monotonically with logical expressiveness, from 1.04 to 2.60. On downstream mathematics and general reasoning benchmarks, more expressive training settings yield both larger performance gains (up to +10.66 points) and more compute-efficient transfer compared to less expressive settings, demonstrating that what a model is trained on, not just how much it is trained, shapes downstream transfer. We further show that the power-law relationship holds across multiple RL methods, and curriculum-based training substantially improves scaling efficiency.
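
    The reported fit is straightforward to reproduce on synthetic data: a power law T = c·D^γ is linear in log-log space. The depths and compute values below are fabricated to show the mechanics, not the paper's measurements:

```python
import numpy as np

def fit_power_law(depth, compute):
    """Fit T = c * D**gamma by linear regression in log-log space;
    return (gamma, r_squared)."""
    x, y = np.log(depth), np.log(compute)
    gamma, log_c = np.polyfit(x, y, 1)
    residuals = y - (gamma * x + log_c)
    r_squared = 1 - residuals.var() / y.var()
    return gamma, r_squared

D = np.array([2.0, 4.0, 8.0, 16.0, 32.0])    # reasoning depths
T = 3.0 * D ** 1.5                           # exact power law, gamma = 1.5
gamma, r2 = fit_power_law(D, T)
```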

Techmeme (15)

  1. Sources: WH is preparing to order US agencies to partner with AI companies on cybersecurity; the EO wouldn't require pre-release model testing by the government (Bloomberg)

    The Trump administration is preparing to order US agencies to partner with artificial intelligence companies to protect networks …

  2. EU warns that VPNs are being used to bypass online age-verification systems, calling their use "a loophole in the legislation that needs closing" (Alex Lekander/CyberInsider)

    The European Parliamentary Research Service (EPRS) has warned that virtual private networks (VPNs) are increasingly being used …

  3. Impressions of China's AI ecosystem after visiting many leading AI labs there, and the similarities and differences in working on LLMs in China and the West (Nathan Lambert/Interconnects AI)

    Lessons from my trip to talk to most of the leading AI labs in China. …

  4. Sources: Apollo Global and Blackstone are among private credit lenders in talks with Broadcom over a ~$35B financing deal to fund the development of AI chips (Bloomberg)

    Apollo Global Management Inc. and Blackstone Inc. are among private credit lenders involved in talks with chipmaker Broadcom Inc …

  5. Sources: Cerebras plans to raise its IPO price range from $115-$125 per share to $125-$135 after drawing orders for more than 20x the number of shares available (Bloomberg)

    Cerebras Systems Inc. is set to increase the price range of its initial public offering as soon as Monday …

  6. A hack of Instructure's Canvas locked out students at schools and universities across the US in the middle of finals period; some US colleges postponed exams (Associated Press)

    Schools and universities across the country are recovering from an outage that knocked down Canvas, an online platform that manages exams …

  7. Sources: Isomorphic Labs, an AI-powered drug discovery company spun out of Google DeepMind, is in advanced talks to raise $2B+ led by Thrive Capital (Bloomberg)

    Isomorphic Labs, an AI-powered drug discovery company spun out of Alphabet Inc.'s Google DeepMind, is in advanced discussions to raise …

  8. Akamai says it struck a seven-year cloud computing deal with a "leading frontier model provider"; sources: the deal was with Anthropic and is worth $1.8B (Rachel Metz/Bloomberg)

    Anthropic PBC has signed a $1.8 billion computing deal with cloud services provider Akamai Technologies Inc. to meet surging demand …

  9. Interviews with Intel CEO Lip-Bu Tan and other execs about the challenges of a turnaround, along with efforts to instill a sense of urgency in the organization (Ian King/Bloomberg)

    After Lip-Bu Tan became chief executive officer of Intel Corp. in March of last year, the struggling company's shares went nowhere …

  10. Sources: Apple and Intel have reached a formal deal in recent months for Intel to manufacture some chips for Apple devices; INTC closes up 13.93% (Wall Street Journal)

    The iPhone maker and U.S. silicon giant will work together on chips for Apple devices. The Trump administration pushed for the deal.

  11. OnlyFans agrees to sell a ~16% stake to Architect Capital for $535M in a deal that values the company at about $3.15B (Olivia Solon/Bloomberg)

    OnlyFans, the platform best known for adult content, agreed to sell a minority stake to Architect Capital in a deal that values the British company at about $3.15 billion.

  12. Investor letter: TCI, one of the world's biggest hedge funds, cut almost all of its $8B Microsoft stake, citing AI risks primarily for Office and some for Azure (Costas Mourselas/Financial Times)

    Costas Mourselas / Financial Times : TCI has cut its position in the tech giant from 10% to 1% — Sir Christopher Hohn's hedge fund TCI has dumped almost …

  13. Whoop plans to offer US users on-demand, in-app video consultations with licensed clinicians, and adds electronic health records and AI-powered health guidance (Brandon Gomez/CNBC)

    Brandon Gomez / CNBC : Wearable fitness tracker Whoop announced on Friday it will introduce in-app access to on-demand licensed clinicians for users in the United States.

  14. Sony reports a $765M impairment loss due to Marathon developer Bungie's underperformance during its FY ending March 31, 2026; Sony acquired Bungie in early 2022 (Wesley Yin-Poole/IGN)

    Wesley Yin-Poole / IGN : Extracting value. — Sony has reported a $765 million impairment loss due to underperformance of Marathon developer Bungie during its last financial year.

  15. Sources: DeepSeek seeks to raise up to ~$7.3B at a $50B+ valuation in its first-ever funding round and CEO Liang Wenfeng could make a ~$2.9B personal investment (The Information)

    The Information : Liang Wenfeng, billionaire founder and CEO of DeepSeek, is planning to write the biggest check for the startup's first-ever funding round …

Solidot(15)

  1. 60% of MD5-hashed passwords can be cracked within an hour

    Kaspersky researchers found that with a single Nvidia RTX 5090 graphics card, 60% of MD5-hashed passwords can be cracked within an hour, and 48% within a single minute. If a leaked password database used a hashing algorithm like MD5, the passwords should essentially be considered exposed. The researchers analyzed more than 200 million leaked passwords and found that attackers can exploit common patterns to optimize their cracking algorithms, significantly shortening the time needed to guess character combinations. They attribute the speedup mainly to GPUs: GPU performance keeps improving, while password security has barely changed.
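    The summary above does not show what such an attack looks like; as a minimal illustration (the leaked digest and wordlist here are invented examples, not from the Kaspersky study), a dictionary attack against unsalted MD5 is just a loop of very cheap hash comparisons:

    ```python
    import hashlib

    # A leaked database stores raw MD5 digests: no salt, no work factor.
    leaked = {"5f4dcc3b5aa765d61d8327deb882cf99"}  # md5("password")

    # The attacker replays common patterns; each guess costs one fast hash.
    wordlist = ["123456", "password", "letmein", "qwerty"]

    def crack(digests, guesses):
        """Return the guesses whose MD5 digest appears in the leaked set."""
        found = []
        for guess in guesses:
            if hashlib.md5(guess.encode()).hexdigest() in digests:
                found.append(guess)
        return found

    print(crack(leaked, wordlist))  # → ['password']
    ```

    A GPU like the RTX 5090 performs this same hash-and-compare step billions of times per second, which is why fast unsalted hashes like MD5 offer no real protection once a database leaks; slow, salted KDFs (bcrypt, scrypt, Argon2) exist precisely to make each guess expensive.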

  2. Cloudflare lays off more than 1,100 employees

    Cloudflare announced layoffs of more than 1,100 employees, citing a business restructuring driven by the rapid adoption of AI tools. As of the end of 2025 Cloudflare had 5,156 full-time employees, so 1,100 amounts to roughly a fifth of the workforce. In a letter to staff, CEO Matthew Prince and co-founder Michelle Zatlyn said the company is reimagining how every team and function operates to adapt to the "era of agentic AI."

  3. The PHP project retires the PHP License

    The PHP project has formally announced that it is retiring the PHP License and switching to the 3-Clause BSD License. The PHP License is a free software license incompatible with the GPL because it restricts use of the name "PHP". The license also grants the PHP Group the power to modify it, and any modification requires written consent from every founding member of the PHP Group. The PHP project includes the Zend Engine, developed by Zend Technologies, which was acquired by Perforce Software in 2019; Perforce has also agreed to the license change. The project says it has now obtained full authorization to change the license.

  4. Global smartphone prices hit a record first-quarter high in Q1 2026

    According to a Counterpoint Research report, although soaring memory and storage prices pushed handset prices up and shipments down, global smartphone market revenue in Q1 2026 grew 8% year over year to $117 billion, while the average selling price rose 12% year over year to $399, a record high for a first quarter. Apple's revenue grew 22% year over year in Q1 2026, the fastest among the top five smartphone brands, also a first-quarter revenue record, and for the first time Apple ranked first in global smartphone shipments in a first quarter, with a 21% market share. Samsung ranked second in both revenue and shipments in Q1 2026. Xiaomi's first-quarter shipments fell 19% year over year and its revenue fell 18%. OPPO and vivo ranked fourth and fifth by Q1 2026 revenue.

  5. Motherboard sales plunge

    The AI boom has driven up prices of key PC components such as memory, which in turn has cratered sales of components less affected by AI, such as motherboards. Consumers are deferring PC upgrades because of high prices, and all four major motherboard makers have cut their sales targets. Asus sold 15 million motherboards in 2025 but will ship just over 5 million in the first half of 2026; its full-year total may fall below ten million, a 33% year-over-year decline. Gigabyte and MSI sold 11.5 million and 11 million boards respectively last year; both have now cut their internal 2026 forecasts, to 9 million (Gigabyte) and 8.4 million (MSI), declines of 22% and 24%. ASRock is hit hardest, with shipments expected to fall 37%, from 4.3 million in 2025 to 2.7 million this year. Taken together, these figures imply the overall motherboard market (at least across the four major makers) is shrinking by 28%.
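    The item's percentages can be cross-checked from the unit figures it cites; a quick sketch, taking Asus's 2026 number as the "below ten million" full-year estimate the item mentions:

    ```python
    # 2025 vs. 2026 (forecast) motherboard shipments, in millions of units,
    # using the figures cited in the article.
    shipments = {
        "Asus":     (15.0, 10.0),   # 2026 value: the "below ten million" estimate
        "Gigabyte": (11.5, 9.0),
        "MSI":      (11.0, 8.4),
        "ASRock":   (4.3, 2.7),
    }

    for vendor, (y2025, y2026) in shipments.items():
        drop = (1 - y2026 / y2025) * 100
        print(f"{vendor}: {drop:.0f}% decline")

    total_2025 = sum(a for a, _ in shipments.values())
    total_2026 = sum(b for _, b in shipments.values())
    print(f"Top-four market: {(1 - total_2026 / total_2025) * 100:.0f}% decline")
    ```

    The per-vendor declines come out to 33%, 22%, 24%, and 37%, and the combined top-four market to 28%, matching the figures in the summary.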

  6. Pollinating insects are closely tied to farmers' health and income

    According to a study published in Nature, pollinating insects are closely tied to smallholder farmers' health and income. UK researchers surveyed the diets, nutritional status, farming practices, and socioeconomic conditions of 776 people across 10 smallholder communities in Nepal, and recorded the pollinator species supporting their nutrition and livelihoods. They found that insect pollinators directly contribute 44% of farm income and more than 20% of vitamin A, folate, and vitamin E intake. The study suggests the relationship between pollinators and people is vital to sustaining both environmental and human health: pollinator decline is associated with lower nutrient intake and household income.

  7. Valve releases the Steam controller's CAD files under a CC license

    Valve has released CAD files for its Steam controller, which sold out at launch, under a Creative Commons (CC) license. Users are free to use the files to make docks, stands, decorative shells, or anything else they like. Valve says the controller belongs to its owner, who may do with it as they please, though it recommends leaving work on the hardware to professionals, since accidental damage is not covered by the warranty.

  8. CRISPR-Cas12a2 can selectively destroy cells

    Researchers at Utah State University, publishing in Nature, report a breakthrough with the gene-editing technology CRISPR-Cas12a2: it can selectively destroy cells, with major implications for treating disease. The difficulty in treating diseases including cancer lies in clearing malignant tumors or infections without damaging healthy tissue. Whereas the better-known CRISPR-Cas9 uses a guide RNA to bind complementary DNA, CRISPR-Cas12a2 uses a guide RNA to bind complementary RNA. The researchers report that Cas12a2 is highly target-specific with almost no off-target effects: it selectively killed cancer-causing cells carrying a single point mutation while leaving cells without the mutation unaffected, and no side effects were observed. In mouse experiments, a single treatment shrank tumor volume by about 50%.

  9. SpaceX IPO grants Musk unchecked power and bars investors from suing

    According to SpaceX's IPO registration filing, a combination of super-voting shares, mandatory arbitration, and other provisions grants Elon Musk and others sweeping control. Musk holds nearly unchecked executive power, while investors and shareholders are limited in their ability to challenge management, file lawsuits, or vote on corporate governance matters; in effect, the only person who can fire Musk is Musk himself. Musk currently holds 42.5% of SpaceX's equity and 83.8% of its voting power, and will retain more than 50% of the voting power after the listing. SpaceX's IPO documents were filed confidentially, allowing the company to advance the IPO process without disclosing detailed financial information.

  10. SARS-CoV-2-like virus found in Thai bats

    Researchers at the University of Tokyo, publishing in Cell, report discovering a virus in bats in Thailand whose genetic information resembles that of SARS-CoV-2 and which may be capable of infecting humans. SARS-CoV-2 emerged in 2019 and caused a global pandemic; it is widely believed to have originated in bats. In recent years, studies have detected multiple viruses genetically similar to SARS-CoV-2 in wild bats across Southeast Asia, but unlike SARS-CoV-2, those earlier viruses could not bind the protein on the surface of human cells and were essentially incapable of infecting humans. The team surveyed viruses in bats living in Thailand and identified a new virus that can bind the human cell protein. Laboratory tests showed that existing COVID-19 vaccines and treatments are effective against the new virus, whose replication ability and pathogenicity are lower than those of SARS-CoV-2.

  11. SpaceX has reduced its Falcon 9 launch cadence

    Falcon 9 is SpaceX's workhorse rocket; the company also operates the heavy-lift Falcon Heavy, while Starship remains in development. SpaceX's Falcon rockets flew 96 missions in 2023, 134 in 2024, and 165 in 2025. Earlier this year SpaceX president Gwynne Shotwell said the company planned 140 to 145 Falcon launches for the year, with volume declining as Starship enters service. The slowdown is already visible in launch pad activity: pad LC-39A at Kennedy Space Center is no longer used for Falcon 9 and now hosts a small number of Falcon Heavy launches as well as Starship. Falcon 9 is unlikely to retire soon, with a service life expected to extend into the 2030s and beyond, but its launch cadence is falling.

  12. Future reCAPTCHA verification will require a phone

    How can websites curb deceptive automated traffic from AI agents? Google's next-generation reCAPTCHA will distinguish humans from machines by requiring people to scan a QR code with their phone. Google's device requirements: Apple devices running iOS v15.0-16.4 must download a dedicated reCAPTCHA app, while Android devices must run a Google Play Services version newer than 25.41.30, which was released in October 2025. In the future you may need a fairly recent phone just to browse the web normally.

  13. Zuckerberg accused of personally authorizing and encouraging copyright infringement

    The five major publishers Hachette, Macmillan, McGraw Hill, Elsevier, and Cengage, together with author Scott Turow, have sued Meta and its CEO Mark Zuckerberg, alleging that Zuckerberg personally authorized and actively encouraged massive copyright infringement by using pirated books, journal articles, and web-scraped material to train Meta's Llama AI systems. Meta denies any wrongdoing and says it will contest the suit, noting that courts have held that training AI on copyrighted material can be fair use. Training AI on copyrighted material may indeed be fair use, but the plaintiffs allege Meta obtained the material through illegal means. The complaint says that to win the AI arms race and build a fully functional generative AI model, Meta and Zuckerberg followed their "move fast and break things" credo: they first illegally downloaded millions of copyrighted books and journal articles from pirate sites, then scraped nearly the entire internet without authorization, constituting one of the largest copyright infringements in history.

  14. Atmospheric CO2 concentration sets a new record

    The US National Oceanic and Atmospheric Administration (NOAA) observatory at Mauna Loa, Hawaii recorded a record April average CO2 concentration of 431 ppm. Zachary Labe, a climate scientist at the climate nonprofit Climate Central, said the record is disheartening but not surprising, showing that atmospheric CO2 keeps rising as the planet continues to warm. He explained that atmospheric CO2 typically peaks each April, because decaying plants release the greenhouse gas after winter; some of that CO2 is reabsorbed as plants grow during the warmer months. But NOAA's data show a worrying trend: the monthly average CO2 concentration keeps climbing.

  15. CNN founder Ted Turner dies at 87

    CNN founder Ted Turner has died at the age of 87. The network he founded became famous for 24-hour, real-time global news coverage and revolutionized television news. CNN launched on June 1, 1980 as the first 24-hour cable news network. After CNN was sold to Time Warner in 1995, Turner left the television business; he always called CNN the greatest achievement of his life.