OrangeBot.AI Digest — 2026-04-30

88 headlines across 6 sources, aggregated for the day.

Hacker News (15)

  1. Rivian allows you to disable all internet connectivity (rivian.com)
  2. LinkedIn scans for 6,278 extensions and encrypts the results into every request (404privacy.com)
  3. U.S. Senators Vote to Ban Themselves from Trading on Prediction Markets (www.wsj.com)
  4. CopyFail was not disclosed to Gentoo developer (www.openwall.com)
  5. Shai-Hulud Themed Malware Found in the PyTorch Lightning AI Training Library (semgrep.dev)
  6. How Mark Klein told the EFF about Room 641A [book excerpt] (thereader.mitpress.mit.edu)
  7. Spain's parliament will act against massive IP blockages by LaLiga (www.democrata.es)
  8. Claude Code refuses requests or charges extra if your commits mention "OpenClaw" (twitter.com)
  9. You can beat the binary search (lemire.me)
  10. How an oil refinery works (www.construction-physics.com)
  11. Meta in row after workers who saw smart glasses users having sex lose jobs (www.bbc.com)
  12. I aggregated 28 US Government auction sites into one search (bidprowl.com)
  13. Belgium stops decommissioning nuclear power plants (dpa-international.com)
  14. GCC 16 has been released (gcc.gnu.org)
  15. Granite 4.1: IBM's 8B Model Matching 32B MoE (firethering.com)

GitHub Trending (13)

  1. warpdotdev / warp
  2. TauricResearch / TradingAgents
  3. mattpocock / skills
  4. obra / superpowers
  5. lukilabs / craft-agents-oss
  6. public-apis / public-apis
  7. 1jehuang / jcode
  8. soxoj / maigret
  9. HunxByts / GhostTrack
  10. iamgio / quarkdown
  11. ghostty-org / ghostty
  12. ForrestKnight / open-source-cs
  13. browserbase / skills

Product Hunt (15)

  1. Hera Launch

    Create studio-quality launch videos with AI

  2. Gemini Deep Research Agent

    Web and MCP research agents, now in Gemini API

  3. Tabstack

    Extract web data and automate browsers, no scraper required.

  4. Wonder

    The AI design agent that works on your canvas

  5. VideoOS by Jupitrr AI

    Your all-in-one video workflow

  6. Sync-in

    Open-source file storage, sharing, collaboration & syncing

  7. Tinfoil

    AI chat and API that keeps your conversations fully private

  8. AstroGrid - Universe Engine

    Explore the entire universe in your browser, in real 3D

  9. MailToDock

    Turn Gmail into Google Tasks with AI-powered

  10. Voice Agent API

    One API to build production-ready voice agents

  11. Symphony

    An open-source spec for Codex orchestration

  12. SuperMind

    Business that Runs Itself

  13. Basedash Dashboard Agent

    Builds entire dashboards from a single prompt

  14. Quarkdown

    Markdown with LaTeX in a modern typesetting system

  15. ElevenMusic

    AI-assisted music creation with built-in discovery, royalty

Hugging Face (15)

  1. GLM-5V-Turbo: Toward a Native Foundation Model for Multimodal Agents

    We present GLM-5V-Turbo, a step toward native foundation models for multimodal agents. As foundation models are increasingly deployed in real environments, agentic capability depends not only on language reasoning, but also on the ability to perceive, interpret, and act over heterogeneous contexts such as images, videos, webpages, documents, and GUIs. GLM-5V-Turbo is built around this objective: multimodal perception is integrated as a core component of reasoning, planning, tool use, and execution, rather than as an auxiliary interface to a language model. This report summarizes the main improvements behind GLM-5V-Turbo across model design, multimodal training, reinforcement learning, toolchain expansion, and integration with agent frameworks. These developments lead to strong performance in multimodal coding, visual tool use, and framework-based agentic tasks, while preserving competitive text-only coding capability. More importantly, our development process offers practical insights for building multimodal agents, highlighting the central role of multimodal perception, hierarchical optimization, and reliable end-to-end verification.

  2. Large Language Models Explore by Latent Distilling

    Generating diverse responses is crucial for test-time scaling of large language models (LLMs), yet standard stochastic sampling mostly yields surface-level lexical variation, limiting semantic exploration. In this paper, we propose Exploratory Sampling (ESamp), a decoding approach that explicitly encourages semantic diversity during generation. ESamp is motivated by the well-known observation that neural networks tend to make lower-error predictions on inputs similar to those encountered before, and incur higher prediction error on novel ones. Building on this property, we train a lightweight Distiller at test time to predict deep-layer hidden representations of the LLM from its shallow-layer representations, modeling the LLM's depth-wise representation transitions. During decoding, the Distiller continuously adapts to the mappings induced by the current generation context. ESamp uses the prediction error as a novelty signal to reweight candidate token extensions conditioned on the current prefix, thereby biasing decoding toward less-explored semantic patterns. ESamp is implemented with an asynchronous training-inference pipeline, with less than 5% worst-case overhead (1.2% in the optimized release). Empirical results show that ESamp significantly boosts the Pass@k efficiency of reasoning models, showing superior or comparable performance to strong stochastic and heuristic baselines. Notably, ESamp achieves robust generalization across mathematics, science, and code generation benchmarks and breaks the trade-off between diversity and coherence in creative writing. Our code has been released at: https://github.com/LinesHogan/tLLM.
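
    A minimal sketch of the reweighting idea as we read this abstract: a lightweight distiller maps shallow-layer hidden states to deep-layer ones, is fit online during decoding, and its per-candidate prediction error is added to the logits as a novelty bonus. The class names, toy dimensions, and the additive alpha weighting are our assumptions, and random tensors stand in for real LLM hidden states.

    ```python
    # Hedged sketch of ESamp-style novelty reweighting (assumed details marked).
    import torch
    import torch.nn as nn

    D_SHALLOW, D_DEEP, VOCAB = 64, 64, 100  # toy sizes, not the paper's

    class Distiller(nn.Module):
        """Lightweight test-time model: shallow-layer hidden -> deep-layer hidden."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(D_SHALLOW, 128), nn.ReLU(),
                                     nn.Linear(128, D_DEEP))
        def forward(self, h):
            return self.net(h)

    distiller = Distiller()
    opt = torch.optim.Adam(distiller.parameters(), lr=1e-3)

    def novelty_reweight(logits, h_shallow, h_deep, alpha=1.0):
        """Add the distiller's prediction error (novelty signal) to candidate logits."""
        with torch.no_grad():
            err = (distiller(h_shallow) - h_deep).pow(2).mean(dim=-1)
        return logits + alpha * err  # assumed additive form of the reweighting

    def adapt_step(h_shallow, h_deep):
        """Online update: fit the distiller to the current generation context."""
        loss = (distiller(h_shallow) - h_deep).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    # Toy decode step with random stand-ins for per-candidate hidden states.
    logits = torch.randn(VOCAB)
    h_s, h_d = torch.randn(VOCAB, D_SHALLOW), torch.randn(VOCAB, D_DEEP)
    next_tok = torch.multinomial(torch.softmax(novelty_reweight(logits, h_s, h_d), -1), 1)
    adapt_step(h_s, h_d)
    ```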

  3. RADIO-ViPE: Online Tightly Coupled Multi-Modal Fusion for Open-Vocabulary Semantic SLAM in Dynamic Environments

    We present RADIO-ViPE (Reduce All Domains Into One -- Video Pose Engine), an online semantic SLAM system that enables geometry-aware open-vocabulary grounding, associating arbitrary natural language queries with localized 3D regions and objects in dynamic environments. Unlike existing approaches that require calibrated, posed RGB-D input, RADIO-ViPE operates directly on raw monocular RGB video streams, requiring no prior camera intrinsics, depth sensors, or pose initialization. The system tightly couples multi-modal embeddings -- spanning vision and language -- derived from agglomerative foundation models (e.g., RADIO) with geometric scene information. This coupling takes place in initialization, optimization, and factor-graph connections to improve the consistency of the map from multiple modalities. The optimization is wrapped within adaptive robust kernels, designed to handle both actively moving objects and agent-displaced scene elements (e.g., furniture rearranged during an ego-centric session). Experiments demonstrate that RADIO-ViPE achieves state-of-the-art results on the dynamic TUM-RGBD benchmark while maintaining competitive performance against offline open-vocabulary methods that rely on calibrated data and static scene assumptions. RADIO-ViPE bridges a critical gap in real-world deployment, enabling robust open-vocabulary semantic grounding for autonomous robotics and unconstrained in-the-wild video streams. Project page: https://be2rlab.github.io/radio_vipe

  4. ClawGym: A Scalable Framework for Building Effective Claw Agents

    Claw-style environments support multi-step workflows over local files, tools, and persistent workspace states. However, scalable development around these environments remains constrained by the absence of a systematic framework, especially one for synthesizing verifiable training data and integrating it with agent training and diagnostic evaluation. To address this challenge, we present ClawGym, a scalable framework that supports the full lifecycle of Claw-style personal agent development. Concretely, we construct ClawGym-SynData, a diverse dataset of 13.5K filtered tasks synthesized from persona-driven intents and skill-grounded operations, paired with realistic mock workspaces and hybrid verification mechanisms. We then train a family of capable Claw-style models, termed ClawGym-Agents, through supervised fine-tuning on black-box rollout trajectories, and further explore reinforcement learning via a lightweight pipeline that parallelizes rollouts across per-task sandboxes. To support reliable evaluation, we further construct ClawGym-Bench, a benchmark of 200 instances calibrated through automated filtering and human-LLM review. Relevant resources will soon be released at https://github.com/ClawGym.

  5. Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models

    Diffusion large language models (dLLMs) offer parallel decoding and bidirectional context, but state-of-the-art dLLMs require billions of parameters for competitive performance. While existing distillation methods for dLLMs reduce inference steps within a single architecture, none address cross-architecture knowledge transfer, in which the teacher and student differ in architecture, attention mechanism, and tokenizer. We present TIDE, the first framework for cross-architecture dLLM distillation, comprising three modular components: (1) TIDAL, which jointly modulates distillation strength across training progress and diffusion timestep to account for the teacher's noise-dependent reliability; (2) CompDemo, which enriches the teacher's context via complementary mask splitting to improve predictions under heavy masking; and (3) Reverse CALM, a cross-tokenizer objective that inverts chunk-level likelihood matching, yielding bounded gradients and dual-end noise filtering. Distilling 8B dense and 16B MoE teachers into a 0.6B student via two heterogeneous pipelines outperforms the baseline by an average of 1.53 points across eight benchmarks, yielding notable gains in code generation, where HumanEval scores reach 48.78 compared to 32.3 for the AR baseline.

  6. Diffusion Templates: A Unified Plugin Framework for Controllable Diffusion

    Controllable diffusion methods have substantially expanded the practical utility of diffusion models, but they are typically developed as isolated, backbone-specific systems with incompatible training pipelines, parameter formats, and runtime hooks. This fragmentation makes it difficult to reuse infrastructure across tasks, transfer capabilities across backbones, or compose multiple controls within a single generation pipeline. We present Diffusion Templates, a unified and open plugin framework that decouples base-model inference from controllable capability injection. The framework is organized around three components: Template models that map arbitrary task-specific inputs to an intermediate capability representation, a Template cache that functions as a standardized interface for capability injection, and a Template pipeline that loads, merges, and injects one or more Template caches into the base diffusion runtime. Because the interface is defined at the systems level rather than tied to a specific control architecture, heterogeneous capability carriers such as KV-Cache and LoRA can be supported under the same abstraction. Based on this design, we build a diverse model zoo spanning structural control, brightness adjustment, color adjustment, image editing, super-resolution, sharpness enhancement, aesthetic alignment, content reference, local inpainting, and age control. These case studies show that Diffusion Templates can unify a broad range of controllable generation tasks while preserving modularity, composability, and practical extensibility across rapidly evolving diffusion backbones. All resources will be open sourced, including code, models, and datasets.
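
    The three components read like a small plugin registry. A hedged structural sketch under our own naming assumptions (TemplateCache, TemplatePipeline, and the toy "bias" capability are illustrative, not the paper's API):

    ```python
    # Structural sketch only: capability carriers behind one cache interface.
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class TemplateCache:
        """Standardized carrier for an injected capability (e.g. KV-cache or LoRA)."""
        kind: str
        payload: Any

    def apply_capability(x: float, cache: TemplateCache) -> float:
        # Placeholder: a real system would hook KV-caches or merge LoRA weights.
        return x + cache.payload if cache.kind == "bias" else x

    class TemplatePipeline:
        """Loads, merges, and injects Template caches into a base runtime."""
        def __init__(self, base_model: Callable[[float], float]):
            self.base_model = base_model
            self.caches: list[TemplateCache] = []

        def load(self, cache: TemplateCache) -> "TemplatePipeline":
            self.caches.append(cache)  # multiple controls compose in load order
            return self

        def run(self, x: float) -> float:
            for c in self.caches:
                x = apply_capability(x, c)
            return self.base_model(x)

    pipe = TemplatePipeline(base_model=lambda x: x * 2)
    print(pipe.load(TemplateCache("bias", 1.0)).run(3.0))  # (3 + 1) * 2 = 8.0
    ```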

  7. Operating-Layer Controls for Onchain Language-Model Agents Under Real Capital

    We study reliability in autonomous language-model agents that translate user mandates into validated tool actions under real capital. The setting is DX Terminal Pro, a 21-day deployment in which 3,505 user-funded agents traded real ETH in a bounded onchain market. Users configured vaults through structured controls and natural-language strategies, but only agents could choose normal buy/sell trades. The system produced 7.5M agent invocations, roughly 300K onchain actions, about $20M in volume, more than 5,000 ETH deployed, roughly 70B inference tokens, and 99.9% settlement success for policy-valid submitted transactions. Long-running agents accumulated thousands of sequential decisions, including 6,000+ prompt-state-action cycles for continuously active agents, yielding a large-scale trace from user mandate to rendered prompt, reasoning, validation, portfolio state, and settlement. Reliability did not come from the base model alone; it emerged from the operating layer around the model: prompt compilation, typed controls, policy validation, execution guards, memory design, and trace-level observability. Pre-launch testing exposed failures that text-only benchmarks rarely measure, including fabricated trading rules, fee paralysis, numeric anchoring, cadence trading, and misread tokenomics. Targeted harness changes reduced fabricated sell rules from 57% to 3%, reduced fee-led observations from 32.5% to below 10%, and increased capital deployment from 42.9% to 78.0% in an affected test population. We show that capital-managing agents should be evaluated across the full path from user mandate to prompt, validated action, and settlement.

  8. Accelerating RL Post-Training Rollouts via System-Integrated Speculative Decoding

    RL post-training of frontier language models is increasingly bottlenecked by autoregressive rollout generation, making rollout acceleration a central systems challenge. Many existing efficiency methods improve throughput by changing the rollout or optimization regime, for example, through off-policy execution, replay, or lower-precision generation. We study speculative decoding as a lossless acceleration primitive for RL rollouts that preserves the target model's output distribution. We implement speculative decoding in NeMo-RL with a vLLM backend, supporting both synchronous and asynchronous pipelines and enabling speculation during RL rollouts. This benefit is realizable across speculation mechanisms, such as pretrained MTP heads, small external draft models, or even techniques such as Eagle3, which are traditionally applied after the RL phase. This yields a deployment path for state-of-the-art speculative decoding inside RL training. In a reasoning post-training workload at 8B scale under synchronous RL, speculative decoding improves rollout throughput by 1.8x. Using a high-fidelity performance simulator, we project that combining speculative decoding with asynchronous RL yields up to 2.5x end-to-end training speedup at 235B scale.
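
    For readers unfamiliar with the primitive: the standard draft-then-verify loop (the classic rejection-sampling formulation, not NeMo-RL's actual implementation) accepts drafted tokens in a way that keeps the output distribution identical to the target model's, which is why the abstract can call it lossless. A toy sketch:

    ```python
    # Toy accept/reject step of standard speculative decoding (illustrative only).
    import torch

    def speculative_step(draft_probs, target_probs, draft_tokens):
        """Verify K drafted tokens; output distribution matches the target model.

        draft_probs, target_probs: (K, V) next-token distributions per position
        draft_tokens: (K,) tokens proposed by the draft mechanism
        """
        out = []
        for k, tok in enumerate(draft_tokens.tolist()):
            p, q = target_probs[k, tok], draft_probs[k, tok]
            if torch.rand(()) < torch.clamp(p / q, max=1.0):
                out.append(tok)  # accepted: draft token kept losslessly
            else:
                # Rejected: resample from the clipped residual distribution.
                residual = torch.clamp(target_probs[k] - draft_probs[k], min=0)
                out.append(torch.multinomial(residual / residual.sum(), 1).item())
                break  # later drafted tokens are discarded; drafting resumes here
        # (The full algorithm also samples one bonus token when all K are accepted.)
        return out

    V, K = 50, 4
    draft = torch.softmax(torch.randn(K, V), -1)
    target = torch.softmax(torch.randn(K, V), -1)
    tokens = torch.multinomial(draft, 1).squeeze(-1)
    print(speculative_step(draft, target, tokens))
    ```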

  9. Unified 4D World Action Modeling from Video Priors with Asynchronous Denoising

    We propose X-WAM, a Unified 4D World Model that unifies real-time robotic action execution and high-fidelity 4D world synthesis (video + 3D reconstruction) in a single framework, addressing the critical limitations of prior unified world models (e.g., UWM) that only model 2D pixel-space and fail to balance action efficiency and world modeling quality. To leverage the strong visual priors of pretrained video diffusion models, X-WAM imagines the future world by predicting multi-view RGB-D videos, and obtains spatial information efficiently through a lightweight structural adaptation: replicating the final few blocks of the pretrained Diffusion Transformer into a dedicated depth prediction branch for the reconstruction of future spatial information. Moreover, we propose Asynchronous Noise Sampling (ANS) to jointly optimize generation quality and action decoding efficiency. ANS applies a specialized asynchronous denoising schedule during inference, which rapidly decodes actions with fewer steps to enable efficient real-time execution, while dedicating the full sequence of steps to generate high-fidelity video. Rather than entirely decoupling the timesteps during training, ANS samples from their joint distribution to align with the inference distribution. Pretrained on over 5,800 hours of robotic data, X-WAM achieves 79.2% and 90.7% average success rate on RoboCasa and RoboTwin 2.0 benchmarks, while producing high-fidelity 4D reconstruction and generation surpassing existing methods in both visual and geometric metrics.

  10. FAMA: Failure-Aware Meta-Agentic Framework for Open-Source LLMs in Interactive Tool Use Environments

    Large Language Models are being increasingly deployed as the decision-making core of autonomous agents capable of effecting change in external environments. Yet, in conversational benchmarks, which simulate real-world customer-centric issue resolution scenarios, these agents frequently fail due to the cascading effects of incorrect decision-making. These challenges are particularly pronounced for open-source LLMs with smaller parameter sizes, limited context windows, and constrained inference budgets, which contribute to increased error accumulation in agentic settings. To tackle these challenges, we present the Failure-Aware Meta-Agentic (FAMA) framework. FAMA operates in two stages: first, it analyzes failure trajectories from baseline agents to identify the most prevalent errors; second, it employs an orchestration mechanism that activates a minimal subset of specialized agents tailored to address these failures by injecting a targeted context for the tool-use agent before the decision-making step. Experiments across open-source LLMs demonstrate performance gains up to 27% across evaluation modes over standard baselines. These results highlight that targeted curation of context through specialized agents to address common failures is a valuable design principle for building reliable, multi-turn tool-use LLM agents that simulate real-world conversational scenarios.

  11. A Survey on LLM-based Conversational User Simulation

    User simulation has long played a vital role in computer science due to its potential to support a wide range of applications. Language, as the primary medium of human communication, forms the foundation of social interaction and behavior. Consequently, simulating conversational behavior has become a key area of study. Recent advancements in large language models (LLMs) have significantly catalyzed progress in this domain by enabling high-fidelity generation of synthetic user conversations. In this paper, we survey recent advancements in LLM-based conversational user simulation. We introduce a novel taxonomy covering user granularity and simulation objectives. Additionally, we systematically analyze core techniques and evaluation methodologies. We aim to keep the research community informed of the latest advancements in conversational user simulation and to further facilitate future research by identifying open challenges and organizing existing work under a unified framework.

  12. Praxy Voice: Voice-Prompt Recovery + BUPS for Commercial-Class Indic TTS from a Frozen Non-Indic Base at Zero Commercial-Training-Data Cost

    Commercial TTS systems produce near-native Indic audio, but the best open-source bases (Chatterbox, Indic Parler-TTS, IndicF5) trail them on measured phonological dimensions, and the most widely adopted multilingual base (Chatterbox, 23 languages) does not even tokenise Telugu or Tamil. We ask: what is the minimum intervention that brings such a non-Indic-native base to commercial-class output on Telugu, Tamil, and Hindi, without training a new acoustic decoder and without any commercial TTS training data? We combine three pieces: (1) BUPS, a Brahmic Unified Phoneme Space that deterministically romanises seven Indic scripts to ISO-15919 so Chatterbox's Latin tokeniser can process them; (2) a LoRA adapter on only the text-token predictor (Chatterbox's t3), trained on ~1,220h of licensed Indic audio with a Hindi-proxy language_id; (3) a voice-prompt recovery recipe -- an 8-11s same-language reference clip plus three sampling overrides (exaggeration 0.7, temperature 0.6, min_p 0.1; "Config B") -- that recovers commercial-class acoustic output with no acoustic-decoder training. On Hindi, the LoRA regresses accuracy and we instead use vanilla Chatterbox + Config B, giving a two-branch deployment. Evaluated on 10-utterance pilot sets with the companion PSP benchmark, Praxy Voice matches or slightly leads commercial baselines: 26.7% retroflex collapse on Telugu (vs Sarvam Bulbul 33.3%), 71% Tamil-zha collapse (vs commercial trio's 86%), 0.025 LLM-WER on Hindi (tied with Cartesia Sonic-3). For intra-sentential code-mix we add a third branch (IndicF5 + native-script transliteration) that drops code-mix LLM-WER from 0.80-0.85 to 0.14-0.27 across Hi/Te/Ta. We release R6 LoRA weights (Apache-2.0), inference code and router (MIT), and a Gradio demo.
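
    The two deployment knobs the abstract quotes are concrete enough to restate. A hedged sketch: the transliteration call assumes the indic_transliteration package's ISO-15919 scheme as a stand-in for BUPS, and tts.generate() is a hypothetical API, not Chatterbox's actual interface.

    ```python
    # "Config B" overrides and ISO-15919 romanisation, per the abstract; the
    # library choice and the generate() call are assumptions, not the paper's code.
    from indic_transliteration import sanscript

    telugu_text = "నమస్కారం"
    # If sanscript.ISO is unavailable in your version, sanscript.IAST is a
    # close Latin scheme for experimentation.
    latin_text = sanscript.transliterate(telugu_text, sanscript.TELUGU, sanscript.ISO)

    CONFIG_B = {"exaggeration": 0.7, "temperature": 0.6, "min_p": 0.1}

    # Hypothetical call: an 8-11s same-language reference clip plus Config B.
    # audio = tts.generate(latin_text, ref_clip="telugu_ref_9s.wav", **CONFIG_B)
    print(latin_text, CONFIG_B)
    ```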

  13. PSP: An Interpretable Per-Dimension Accent Benchmark for Indic Text-to-Speech

    Standard text-to-speech (TTS) evaluation measures intelligibility (WER, CER) and overall naturalness (MOS, UTMOS) but does not quantify accent. A synthesiser may score well on all four yet sound non-native on features that are phonemic in the target language. For Indic languages, these features include retroflex articulation, aspiration, vowel length, and the Tamil retroflex approximant (letter zha). We present PSP, the Phoneme Substitution Profile, an interpretable, per-phonological-dimension accent benchmark for Indic TTS. PSP decomposes accent into six complementary dimensions: retroflex collapse rate (RR), aspiration fidelity (AF), vowel-length fidelity (LF), Tamil-zha fidelity (ZF), Fréchet Audio Distance (FAD), and prosodic signature divergence (PSD). The first four are measured via forced alignment plus native-speaker-centroid acoustic probes over Wav2Vec2-XLS-R layer-9 embeddings; the latter two are corpus-level distributional distances. In this v1 we benchmark four commercial and open-source systems (ElevenLabs v3, Cartesia Sonic-3, Sarvam Bulbul, Indic Parler-TTS) on Hindi, Telugu, and Tamil pilot sets, with a fifth system (Praxy Voice) included on all three languages, plus an R5->R6 case study on Telugu. Three findings: (i) retroflex collapse grows monotonically with phonological difficulty Hindi < Telugu < Tamil (~1%, ~40%, ~68%); (ii) PSP ordering diverges from WER ordering -- commercial WER-leaders do not uniformly lead on retroflex or prosodic fidelity; (iii) no single system is Pareto-optimal across all six dimensions. We release native reference centroids (500 clips per language), 1000-clip embeddings for FAD, 500-clip prosodic feature matrices for PSD, 300-utterance golden sets per language, scoring code under MIT, and centroids under CC-BY. Formal MOS-correlation is deferred to v2; v1 reports five internal-consistency signals plus a native-audio sanity check.
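
    Finding (iii) is easy to make precise. A small self-contained check for Pareto optimality over per-dimension scores (the numbers below are placeholders, not PSP results; assume higher = better after sign-flipping the collapse-rate dimensions):

    ```python
    # A system is Pareto-optimal if no other system is at least as good on
    # every dimension and strictly better on at least one.
    def pareto_optimal(scores: dict[str, tuple]) -> set[str]:
        """scores: system -> tuple of dimension scores, higher = better."""
        optimal = set()
        for a, sa in scores.items():
            dominated = any(
                all(x >= y for x, y in zip(sb, sa)) and sb != sa
                for b, sb in scores.items() if b != a
            )
            if not dominated:
                optimal.add(a)
        return optimal

    demo = {  # illustrative numbers only
        "sys_A": (0.9, 0.4, 0.7), "sys_B": (0.6, 0.8, 0.6), "sys_C": (0.5, 0.7, 0.5),
    }
    print(pareto_optimal(demo))  # {'sys_A', 'sys_B'}; sys_C is dominated by sys_B
    ```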

  14. FASH-iCNN: Making Editorial Fashion Identity Inspectable Through Multimodal CNN Probing

    Fashion AI systems routinely encode the aesthetic logic of specific houses, editors, and historical moments without disclosing it. We present FASH-iCNN, a multimodal system trained on 87,547 Vogue runway images across 15 fashion houses spanning 1991-2024 that makes this cultural logic inspectable. Given a photograph of a garment, the system recovers which house produced it, which era it belongs to, and which color tradition it reflects. A clothing-only model identifies the fashion house at 78.2% top-1 across 14 houses, the decade at 88.6% top-1, and the specific year at 58.3% top-1 across 34 years with a mean error of just 2.2 years. Probing which visual channels carry this signal reveals a sharp dissociation: removing color costs only 10.6pp of house identity accuracy, while removing texture costs 37.6pp, establishing texture and luminance as the primary carriers of editorial identity. FASH-iCNN treats editorial culture as the signal rather than background noise, identifying which houses, eras, and color traditions shaped each output so that users can see not just what the system predicts but which houses, editors, and historical moments are encoded in that prediction.
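
    The channel-ablation probe has a simple shape: re-evaluate the same classifier on inputs with one channel removed and compare accuracies. A hedged sketch of the color ablation (the model and data loader are placeholders, and the paper's exact probing protocol may differ):

    ```python
    # Color-ablation probe: accuracy drop under grayscale inputs isolates how
    # much of the signal rides on color versus texture/luminance.
    import torch
    from torchvision import transforms

    to_gray = transforms.Grayscale(num_output_channels=3)  # drop hue, keep luminance

    def accuracy(model, loader, ablate_color=False):
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in loader:
                if ablate_color:
                    images = to_gray(images)
                pred = model(images).argmax(dim=1)
                correct += (pred == labels).sum().item()
                total += labels.numel()
        return correct / total

    # delta = accuracy(model, val_loader) - accuracy(model, val_loader, ablate_color=True)
    # A small delta (the abstract reports ~10.6pp for color vs 37.6pp for texture)
    # implies color carries relatively little of the editorial-identity signal.
    ```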

  15. Sample Selection Using Multi-Task Autoencoders in Federated Learning with Non-IID Data

    Federated learning is a machine learning paradigm in which multiple devices collaboratively train a model under the supervision of a central server while ensuring data privacy. However, its performance is often hindered by redundant, malicious, or abnormal samples, leading to model degradation and inefficiency. To overcome these issues, we propose novel sample selection methods for image classification, employing a multitask autoencoder to estimate sample contributions through loss and feature analysis. Our approach incorporates unsupervised outlier detection, using one-class support vector machine (OCSVM), isolation forest (IF), and adaptive loss threshold (AT) methods managed by a central server to filter noisy samples on clients. We also propose a multi-class deep support vector data description (SVDD) loss controlled by a central server to enhance feature-based sample selection. We validate our methods on CIFAR10 and MNIST datasets across varying numbers of clients, non-IID distributions, and noise levels up to 40%. The results show significant accuracy improvements with loss-based sample selection, achieving gains of up to 7.02% on CIFAR10 with OCSVM and 1.83% on MNIST with AT. Additionally, our federated SVDD loss further improves feature-based sample selection, yielding accuracy gains of up to 0.99% on CIFAR10 with OCSVM. These results show the effectiveness of our methods in improving model accuracy across various client counts and noise conditions.
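
    A minimal sketch of the loss-based filtering step as we read it: per-sample losses from the multi-task autoencoder (stubbed here with synthetic numbers) feed a server-managed one-class SVM, and clients drop the flagged samples before local training. The sizes and nu value are illustrative.

    ```python
    # Loss-based sample filtering with a one-class SVM (our simplification).
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    # Stand-in per-sample autoencoder losses: mostly clean, 10% noisy/abnormal.
    clean = rng.normal(0.2, 0.05, size=900)
    noisy = rng.normal(1.0, 0.30, size=100)
    losses = np.concatenate([clean, noisy]).reshape(-1, 1)

    # Server-side detector fit on loss statistics; clients would apply the
    # resulting decision function to drop suspect samples before training.
    ocsvm = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(losses)
    keep = ocsvm.predict(losses) == 1  # +1 = inlier, -1 = outlier
    print(f"kept {keep.sum()} of {len(losses)} samples")
    ```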

Techmeme (15)

  1. Sources: Meta HR chief Janelle Gale told employees she can't rule out further layoffs; Zuckerberg said AI automation is not the driving factor behind them (Business Insider)

    Business Insider : Sources: Meta HR chief Janelle Gale told employees she can't rule out further layoffs; Zuckerberg said AI automation is not the driving factor behind them —  Meta previously announced it will cut 10% of its staff next month.  — Meta's HR chief told staff in a meeting that she can't promise further layoffs won't happen.

  2. Twilio reports Q1 revenue up 20% YoY to $1.41B, vs. $1.34B est., and forecasts Q2 revenue above estimates; TWLO jumps 17%+ after hours (Reinhardt Krause/Investor's Business ...)

    Reinhardt Krause / Investor's Business Daily : Twilio reports Q1 revenue up 20% YoY to $1.41B, vs. $1.34B est., and forecasts Q2 revenue above estimates; TWLO jumps 17%+ after hours —  Twilio (TWLO) stock popped after the communications software maker reported first-quarter earnings and revenue that topped consensus estimates.

  3. Atlassian reports Q3 revenue up 32% YoY to $1.79B, vs. $1.69B est., and raises its annual revenue forecast; TEAM jumps 17%+ after hours (Anhata Rooprai/Reuters)

    Anhata Rooprai / Reuters : Atlassian reports Q3 revenue up 32% YoY to $1.79B, vs. $1.69B est., and raises its annual revenue forecast; TEAM jumps 17%+ after hours —  Atlassian (TEAM.O) raised its annual revenue forecast on Thursday, betting that its investments in artificial-intelligence features and a push …

  4. Tim Cook says iPhone sales, which slightly missed Q2 estimates, were held back by chip supply constraints, as "demand was off the charts" (Reuters)

    Reuters : Tim Cook says iPhone sales, which slightly missed Q2 estimates, were held back by chip supply constraints, as “demand was off the charts” —  Apple (AAPL.O) on Thursday reported results that beat Wall Street estimates, with customers showing eagerness to buy a new MacBook model driven …

  5. Apple reports Q2 revenue from Services, which includes the App Store, Apple TV, Apple Music, and more, grew 16.3% YoY to $30.98B, beating estimates of $30.4B (Todd Spangler/Variety)

    Todd Spangler / Variety : Apple reports Q2 revenue from Services, which includes the App Store, Apple TV, Apple Music, and more, grew 16.3% YoY to $30.98B, beating estimates of $30.4B —  Apple hit another high note in the first three months of 2026, delivering an earnings beat driven by a surge in iPhone sales and continued momentum in its Services segment.

  6. Apple Q2: iPhone up 22% YoY to $56.99B, vs. $57.21B est., Mac up 6% to $8.4B, iPad up 8% to $6.91B, and Wearables, Home, and Accessories up 5% to $7.9B (Jennifer Elias/CNBC)

    Jennifer Elias / CNBC : Apple Q2: iPhone up 22% YoY to $56.99B, vs. $57.21B est., Mac up 6% to $8.4B, iPad up 8% to $6.91B, and Wearables, Home, and Accessories up 5% to $7.9B —  Apple issued a better-than-expected revenue forecast for the current period after beating on sales and earnings in the fiscal second quarter.

  7. Apple reports Q2 revenue up 17% YoY to $111.18B, vs. $109.66B est., net income up 19% to $29.6B, and EPS up 22% to $2.01, above $1.95 est. (Apple)

    Apple : Apple reports Q2 revenue up 17% YoY to $111.18B, vs. $109.66B est., net income up 19% to $29.6B, and EPS up 22% to $2.01, above $1.95 est. —  March quarter records for total company revenue, iPhone revenue, and EPS  —  Services revenue reaches new all-time high

  8. Roblox reports Q1 bookings up 43% YoY to $1.7B, vs. $1.73B est., and DAUs up 35% to 132M, below analysts' estimates of 143.8M; RBLX drops 16%+ after hours (Cecilia D'Anastasio/Bloomberg)

    Cecilia D'Anastasio / Bloomberg : Roblox reports Q1 bookings up 43% YoY to $1.7B, vs. $1.73B est., and DAUs up 35% to 132M, below analysts' estimates of 143.8M; RBLX drops 16%+ after hours —  Roblox Corp. reported first-quarter users that fell short of analysts' expectations after implementing safety features restricting how kids …

  9. Roku reports Q1 revenue up 22% YoY to $1.25B, ad revenue up 27% to $613M, subscription revenue up 30%, raises 2026 profit guidance; ROKU jumps 8%+ after hours (Todd Spangler/Variety)

    Todd Spangler / Variety : Roku reports Q1 revenue up 22% YoY to $1.25B, ad revenue up 27% to $613M, subscription revenue up 30%, raises 2026 profit guidance; ROKU jumps 8%+ after hours —  Roku beat Wall Street earnings forecasts for the first quarter of 2026 and raised its full-year profit guidance as the company continues …

  10. Reddit reports Q1 revenue up 69% YoY to $663M, vs. $611M est., DAUq up 17% to 126.8M, vs. 125.9M est., and forecasts Q2 revenue above estimates (Jonathan Vanian/CNBC)

    Jonathan Vanian / CNBC : Reddit reports Q1 revenue up 69% YoY to $663M, vs. $611M est., DAUq up 17% to 126.8M, vs. 125.9M est., and forecasts Q2 revenue above estimates —  Reddit reported better-than-expected profit and revenue in its first-quarter earnings report on Thursday, and also issued an optimistic forecast.

  11. North Korea-linked hackers stole ~$577M across the Drift Protocol and KelpDAO hacks in April, accounting for 76% of total crypto hack losses so far in 2026 (TRM Insights)

    TRM Insights : North Korea-linked hackers stole ~$577M across the Drift Protocol and KelpDAO hacks in April, accounting for 76% of total crypto hack losses so far in 2026 —  Key takeaways  — North Korean hackers, from two distinct groups, stole approximately USD 577 million in 2026 YTD …

  12. Filing: Meta says it might be forced to withdraw its apps from New Mexico if a judge orders it to adopt the state's proposed safety features (Thomas Barrabi/New York Post)

    Thomas Barrabi / New York Post : Filing: Meta says it might be forced to withdraw its apps from New Mexico if a judge orders it to adopt the state's proposed safety features —  Mark Zuckerberg's Meta is threatening a total shutdown of Facebook and Instagram in New Mexico if a state judge orders the company to adopt new safety features …

  13. Google is rolling out Gemini to cars that have Google built in, replacing Google Assistant, starting with English in the US (Jess Weatherbed/The Verge)

    Jess Weatherbed / The Verge : Google is rolling out Gemini to cars that have Google built in, replacing Google Assistant, starting with English in the US —  The current Google Assistant is being replaced with a smarter, more conversational upgrade. … Google is preparing to update vehicles that have Google built-in with its Gemini AI assistant.

  14. Musk v. Altman: the judge told Musk's lawyer she did not want talk of AI's existential threat seeping into the trial, focusing instead on OpenAI's founding (New York Times)

    New York Times : Musk v. Altman: the judge told Musk's lawyer she did not want talk of AI's existential threat seeping into the trial, focusing instead on OpenAI's founding —  Since he had a testy fireside chat about artificial intelligence with the Google co-founder Larry Page more than a decade ago …

  15. Musk v. Altman: when asked whether xAI has distilled OpenAI models, Elon Musk says the claim is "partly" true (Wired)

    Wired : Musk v. Altman: when asked whether xAI has distilled OpenAI models, Elon Musk says the claim is “partly” true —  While answering questions under oath, Musk argued it's standard practice for AI labs to use their competitors' models.  —  While testifying on Thursday in federal court …

Solidot (15)

  1. The price tag Google puts on you

    Swiss email provider Proton used 2025 ad-auction data to analyze more than 54,000 demographic profiles, estimating what advertisers pay to reach different kinds of Americans. The price gaps between people turn out to be far larger than one might imagine. The average American generates about $1,605 in ad value per year; a man living in Bozeman, Montana, aged 35-44, with no children, running high-value business searches on a desktop is valued at an estimated $17,929.30, while a father living in Fort Smith, Arkansas, aged 18-24, running low-value searches on an Android phone is worth just $31.05. The $1,605 average against a $760 median shows that a small number of high-value users pull the average up, and this business model depends on those users. The analysis found that users without children are worth about 17% more on average than users with children; once a user is flagged as a parent, the ads targeting them shift from $6-per-click wealth-management ads to $2-per-click minivan and preschool ads. Desktop users are worth 4.9 times as much as Android users, and Apple iPhone users 2.7 times as much. Ad value peaks between ages 35 and 44 and declines after 65. Although older users are worth less overall, the ads aimed at them fall into high-spend categories such as Medicare supplement insurance, pharmaceuticals, and financial products, so advertisers target them more precisely. Why are Bozeman, Montana residents so valuable? An influx of remote tech workers and heavy outdoor-recreation spending has made the city one of the most competitive local ad markets in the US.

  2. Asian countries ramp up coal-fired power to cope with the energy crisis

    The latest Middle East energy crisis has prompted Asian countries to increase coal-fired generation; coal is a heavily polluting energy source, and if the trend continues, global climate change will only worsen. India announced it is postponing maintenance inspections of its domestic coal plants. International Energy Agency (IEA) data show that as of 2023 coal accounted for 74% of India's electricity generation, with oil and gas together at about 3%; with procurement from the Middle East constrained, India is leaning on coal to avoid blackout risk. Thailand's power utility restarted two coal units that had been slated for retirement. South Korea temporarily lifted the 80%-of-capacity operating cap on coal plants and postponed the closure of two plants originally scheduled to shut in June. Japan will likewise raise utilization at its coal plants, and Bangladesh is adding coal supply sources. Indonesia, the world's largest exporter of thermal coal, plans to raise its 2026 production plan above the planned 600 million tons, and Australia, the second-largest exporter, also plans to expand production.

  3. Event invitation | NVIDIA Developer Meetup: expert deep dives across the full stack, from infrastructure to agents

    From low-level GPU development to long-context LLM inference to agentic AI capable of autonomous planning, the evolution of AI is reshaping the software development paradigm across the stack. To help developers meet increasingly complex full-stack challenges, the NVIDIA enterprise developer community invites you to the upcoming NVIDIA Developer Meetup in Suzhou. The meetup will bring together NVIDIA technical experts, global and local, to share full-stack insights ranging from infrastructure optimization to cutting-edge agent applications.

  4. Greenhouse gas emissions from aquaculture

    A study published in Frontiers of Agricultural Science and Engineering found that aquaculture's greenhouse gas emissions come mainly from four stages: feed production, energy consumption during farming, biochemical processes in ponds and other water bodies (such as the release of methane and nitrous oxide), and land-use change and infrastructure construction. Feed production is the largest source in most fed farming systems, accounting for 52% in studies of China. In regions dominated by freshwater pond farming, such as China, methane emissions are especially prominent, contributing about 90% of farming-system greenhouse gas emissions. Emissions differ markedly across farmed species. Unfed bivalves (such as oysters and clams) and seaweed farming emit very little or even net-negative amounts, acting as carbon sinks through carbon fixation, and herbivorous or omnivorous fish (such as carp and tilapia) at moderate farming intensity also emit relatively little. By contrast, intensively farmed carnivorous fish (such as salmon and trout) and shrimp have markedly higher carbon intensity due to their feed and energy demands, in some cases comparable to terrestrial livestock farming.

  5. Microsoft releases the 86-DOS 1.00 source code

    Microsoft released the MS-DOS 1.25 and 2.11 source code in 2018 and the MS-DOS 4.0 source code in 2024; in April 2026, on the 45th anniversary of 86-DOS 1.00's release, it continued the tradition by publishing the 86-DOS 1.00 source code. 86-DOS was written by Tim Paterson and later became the basis of MS-DOS. The GitHub release includes the 86-DOS 1.00 kernel source, multiple kernel snapshots, and well-known utilities such as CHKDSK.

  6. Genomics pioneer Craig Venter dies at 79

    Genomics pioneer Craig Venter has died at the age of 79. He was best known for racing the Human Genome Project in the late 1990s to build a genome database, but because his database was envisioned as paid-access, it was unpopular in the scientific community and spurred other research teams to accelerate the public release of genome sequencing results. In 2000, in a deal brokered by US President Clinton, the Human Genome Project and Venter's company Celera Genomics agreed that all human genome data would be the common heritage of humanity, not subject to patent protection, and open to all researchers.

  7. GCC 17 adds support for Hygon C86-4G CPUs

    The GCC compiler project has merged patches adding support for the Hygon C86-4G CPU. Hygon began as a semiconductor venture in partnership with AMD, licensed to offer a localized version of AMD's Zen 1 CPUs for the Chinese domestic market only. Hygon announced a merger with Sugon last May, but said at the end of the year that the merger had been called off. The C86-4G is a 16-core/32-thread processor with performance approaching Intel's Raptor Lake CPUs, supporting DDR5 and PCIe Gen 5. Hygon claims the C86-4G uses a new in-house microarchitecture, but judging from the GCC patches alone it still shares many similarities with AMD Zen. The C86-4G line comprises the C86-4G-M4 / C86-4G-M6 / C86-4G-M7 series, of which the C86-4G-M7 supports the AVX-512 instruction set.

  8. Zed editor releases version 1.0

    Zed, a text editor written in Rust, has announced its 1.0 release. The developers say 1.0 does not mean "finished" or "perfect," but that the project has reached a key milestone. They also describe Zed as an AI-native editor that can run multiple AI agents in parallel, including Claude Agent, Codex, OpenCode, and Cursor, with AI built into the editor's core architecture rather than added on as a plugin.

  9. City birds are more afraid of women, for reasons unknown

    According to a study published in People and Nature, great tits and 36 other bird species are more afraid of women, and no one knows why. The study found that men can approach birds roughly one meter closer than women before the birds fly away. The effect persists regardless of clothing, hair length, height, or manner of approach. Birds may be able to tell human sexes apart, but the mechanism is unclear. The researchers observed birds living in city centers in five European countries, including species known to flee on sight of humans, such as magpies, and species that tend to flee later, such as pigeons. The exaggerated fear response toward women was consistent across species. The researchers speculate that birds may distinguish sex by smell or gait.

  10. The .icu domain was briefly hijacked

    Chinese netizens reported that the authoritative servers for the .icu top-level domain briefly resolved to wrong IP addresses. The best-known .icu domain is probably 996.ICU. Unlike a single domain being mis-resolved, a mis-resolved nameserver can amplify the impact. Tests run at 6:22 a.m. UTC on April 28 showed b.nic.icu, one of .icu's authoritative servers, resolving to the correct IP 58.13% of the time, with the remainder resolving to poisoned IPs. By 16:00 UTC the problem was largely resolved, with 99.38% of queries returning the correct IP and only 0.63% returning poisoned IPs.
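
    The measurement itself is easy to reproduce. A hedged sketch with dnspython (the expected IP below is a placeholder, not b.nic.icu's real address):

    ```python
    # Repeatedly resolve a name and tally the answers (dnspython >= 2.x).
    from collections import Counter
    import dns.exception
    import dns.resolver

    NAME, EXPECTED, TRIALS = "b.nic.icu", "203.0.113.7", 100  # EXPECTED is illustrative

    def measure(name: str, trials: int) -> Counter:
        tally = Counter()
        resolver = dns.resolver.Resolver()
        for _ in range(trials):
            try:
                for rr in resolver.resolve(name, "A"):
                    tally[rr.address] += 1
            except dns.exception.DNSException as exc:
                tally[f"error:{type(exc).__name__}"] += 1
        return tally

    # NB: a single recursive resolver caches answers; the reported percentage
    # split came from many queries across vantage points, not one client.
    tally = measure(NAME, TRIALS)
    total = sum(tally.values())
    for ip, n in tally.most_common():
        print(f"{ip:>20} {n / total:6.1%} {'ok' if ip == EXPECTED else 'UNEXPECTED'}")
    ```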

  11. Dutch government launches an open-source code hosting platform

    The Dutch government has launched its own open-source code hosting platform, code.overheid.nl. The platform is built on Forgejo, a fork of Gitea, which is a GitHub-like Git development and version-control platform with bug tracking, wikis, code review, and similar features. It is hosted on Dutch government infrastructure and is free for all government agencies to use. The Dutch government says the move is part of an effort to strengthen digital sovereignty.

  12. Report: over two-thirds of infants use screens, some for up to 8 hours a day

    A report found that more than two-thirds of infants under two use screens, with a small minority spending up to eight hours a day on them. Nearly a third of newborns watch screens for more than three hours a day, and nearly 20% of infants aged 4-11 months use screens for more than an hour a day. Existing evidence links screen time to higher risks of childhood obesity, myopia, sleep and behavioral problems, and social difficulties later in life. A major reason new parents allow infants to use screens is distraction, buying themselves time for housework or their jobs.

  13. Musk says he founded the nonprofit OpenAI to counter Google

    In 2024 Elon Musk sued OpenAI and its co-founders Sam Altman and Greg Brockman in San Francisco Superior Court, alleging they violated the company's founding principles by putting commercial interests above the public interest. OpenAI in turn published Musk's emails, showing that as a former co-founder he had agreed to OpenAI establishing a for-profit entity and pledged funding, but later withheld the money; his aim was majority equity and board control, and the two sides ultimately parted ways over it. The lawsuit went to trial this week. Testifying in court, Musk said he founded OpenAI as a nonprofit to counter Google, and that he would not have supported it had its goal been profit. Musk said the idea of a nonprofit AI company came to him after a falling-out with Google co-founder Larry Page over AI safety: worried that Page was not taking AI safety seriously, he wanted a nonprofit, open-source alternative to stand against Google.

  14. Yawning may help the brain clear cerebrospinal fluid

    Yawning is a common human behavior, perhaps a sign of fatigue or boredom, or a reflex triggered by seeing a companion yawn, yet scientists still do not know why humans yawn. According to a study published in Respiratory Physiology & Neurobiology, Australian researchers used real-time MRI to observe what happens inside the head and neck during a yawn, comparing it with normal breathing and deep breathing. The results show that yawning triggers a distinctive simultaneous outflow of cerebrospinal fluid and venous blood from the skull, whereas deep breathing drives cerebrospinal fluid into the skull. Cerebrospinal fluid acts as a cushion protecting the brain and spinal cord, and it helps deliver nutrients and remove waste. The study suggests yawning helps clear cerebrospinal fluid, most likely just before sleep, and it offers a new avenue for understanding the physiological function of yawning.

  15. Flesh-eating bacteria destroyed a man's arm and leg within three days

    Flesh-eating bacteria destroyed the arm and leg of a 74-year-old Florida man in just three days. Three days earlier he had been healthy and active at the seaside, but he cut his right leg while jumping into the water; the wound soon became unbearably painful and his right arm changed color. He was rushed to the hospital, where his right leg was amputated above the knee. Tests of his blood and tissue samples found Vibrio vulnificus, a flesh-eating bacterium that lives in warm brackish water. Vibrio vulnificus infects people in two ways: through wounds exposed to contaminated water, or through eating contaminated seafood. It releases toxins that kill its victims; the overall fatality rate is as high as 35%, rising to 50-60% in people with compromised immune systems or liver disease, and approaching 100% if antibiotic treatment or surgical removal of necrotic tissue is delayed. In this case the man survived. The case highlights the growing threat Vibrio vulnificus poses under climate change. The US CDC advises eating only fully cooked seafood and staying out of brackish water with open wounds.