OrangeBot.AI Digest — 2026-04-01

82 headlines from 8 sources, aggregated for the day.

Hacker News (15)

  1. Show HN: Git bayesect – Bayesian Git bisection for non-deterministic bugs (github.com)
  2. NASA Artemis II moon mission live launch broadcast (plus.nasa.gov)
  3. The OpenAI graveyard: All the deals and products that haven't happened (www.forbes.com)
  4. Ask HN: Who is hiring? (April 2026)
  5. EmDash – a spiritual successor to WordPress that solves plugin security (blog.cloudflare.com)
  6. OpenAI demand sinks on secondary market as Anthropic runs hot (www.bloomberg.com)
  7. Is BGP safe yet? (isbgpsafeyet.com)
  8. I Quit. The Clankers Won (dbushell.com)
  9. Intuiting Pratt Parsing (louis.co.nz)
  10. Claude wrote a full FreeBSD remote kernel RCE with root shell (github.com)
  11. CERN levels up with new superconducting karts (home.cern)
  12. Show HN: CLI to order groceries via reverse-engineered REWE API (Haskell) (github.com)
  13. Claude Code Unpacked: A visual guide (ccunpacked.dev)
  14. Chess in SQL (www.dbpro.app)
  15. My son pleasured himself on Gemini Live. Entire family's Google accounts banned (old.reddit.com)

GitHub Trending (7)

  1. anthropics / claude-code
  2. microsoft / VibeVoice
  3. google-research / timesfm
  4. luongnv89 / claude-howto
  5. axios / axios
  6. openai / codex
  7. f / prompts.chat

Product Hunt (15)

  1. flock

    Run a flock of Claude Code (or other agents) in one window.

  2. Ray-Ban Meta G2 Blayzer & Scriber Optics

    Meta's first AI glasses built for prescriptions

  3. Ditch

    App cleaner that lives in your MacBook’s notch

  4. Noiz Easter Voice

    Crack an Easter egg to generate an AI voice

  5. EDAMAME Security

    Behavioral detector app for Mac/PC against Axios / LiteLLM-style hacks

  6. Audyr

    AI captures feedback and tells you what to build next

  7. Claudoscope

    Browse, search & track costs across Claude Code sessions

  8. OpenBox

    See, verify, and govern every agent action.

  9. The New White House App

    Get direct, unfiltered access to the People's House

  10. Remodex

    Control Codex on your iPhone

  11. IdeaBoard95

    A community idea board that looks like Windows 95

  12. Stiinks.co

    Link-in-bio, but worse.

  13. Ollama v0.19

    Massive local model speedup on Apple Silicon with MLX

  14. Netwoke

    Your network has secrets; now you can see them.

  15. zero

    One command to deploy Docker containers to your own server

Hugging Face (15)

  1. FIPO: Eliciting Deep Reasoning with Future-KL Influenced Policy Optimization

    We present Future-KL Influenced Policy Optimization (FIPO), a reinforcement learning algorithm designed to overcome reasoning bottlenecks in large language models. While GRPO-style training scales effectively, it typically relies on outcome-based rewards (ORM) that distribute a global advantage uniformly across every token in a trajectory. We argue that this coarse-grained credit assignment imposes a performance ceiling by failing to distinguish critical logical pivots from trivial tokens. FIPO addresses this by incorporating discounted future-KL divergence into the policy update, creating a dense advantage formulation that re-weights tokens based on their influence on subsequent trajectory behavior. Empirically, FIPO enables models to break through the length stagnation seen in standard baselines. Evaluated on Qwen2.5-32B, FIPO extends the average chain-of-thought length from roughly 4,000 to over 10,000 tokens and increases AIME 2024 Pass@1 accuracy from 50.0% to a peak of 58.0% (converging at approximately 56.0%). This outperforms both DeepSeek-R1-Zero-Math-32B (around 47.0%) and o1-mini (approximately 56.0%). Our results suggest that establishing dense advantage formulations is a vital path for evolving ORM-based algorithms to unlock the full reasoning potential of base models. We open-source our training system, built on the verl framework.
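
    The abstract's central claim, that a single outcome reward spread uniformly over tokens under-credits the logical pivots, can be pictured with a toy re-weighting scheme. The formulation below (a suffix-summed, discounted per-token KL used as an influence weight) is an illustrative assumption, not FIPO's published update rule; `dense_advantages` and its arguments are invented names.

```python
import numpy as np

def dense_advantages(outcome_reward, token_kls, gamma=0.95):
    """Toy sketch: spread one outcome reward (ORM-style) across tokens,
    weighting each token by the discounted sum of per-token KL values
    that FOLLOW it in the trajectory. Tokens whose successors diverge
    more from the reference policy get larger credit; trivial tokens
    get less. An assumed formulation, not FIPO's actual update."""
    T = len(token_kls)
    future = np.zeros(T)
    acc = 0.0
    for t in range(T - 1, -1, -1):       # suffix sum of discounted future KL
        future[t] = acc
        acc = token_kls[t] + gamma * acc
    weights = future / (future.sum() + 1e-8)  # normalized influence weights
    return outcome_reward * T * weights       # dense per-token advantages

# A 5-token trajectory where the token at index 2 triggers a large
# policy shift (KL = 0.50); a plain ORM baseline gives every token 1.0.
adv = dense_advantages(1.0, [0.01, 0.02, 0.50, 0.03, 0.01])
```

In this toy, the token just before the large divergence receives the most credit, while the uniform-ORM baseline would assign every token the same advantage.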

  2. CARLA-Air: Fly Drones Inside a CARLA World -- A Unified Infrastructure for Air-Ground Embodied Intelligence

    The convergence of low-altitude economies, embodied intelligence, and air-ground cooperative systems creates growing demand for simulation infrastructure capable of jointly modeling aerial and ground agents within a single physically coherent environment. Existing open-source platforms remain domain-segregated: driving simulators lack aerial dynamics, while multirotor simulators lack realistic ground scenes. Bridge-based co-simulation introduces synchronization overhead and cannot guarantee strict spatial-temporal consistency. We present CARLA-Air, an open-source infrastructure that unifies high-fidelity urban driving and physics-accurate multirotor flight within a single Unreal Engine process. The platform preserves both CARLA and AirSim native Python APIs and ROS 2 interfaces, enabling zero-modification code reuse. Within a shared physics tick and rendering pipeline, CARLA-Air delivers photorealistic environments with rule-compliant traffic, socially-aware pedestrians, and aerodynamically consistent UAV dynamics, synchronously capturing up to 18 sensor modalities across all platforms at each tick. The platform supports representative air-ground embodied intelligence workloads spanning cooperation, embodied navigation and vision-language action, multi-modal perception and dataset construction, and reinforcement-learning-based policy training. An extensible asset pipeline allows integration of custom robot platforms into the shared world. By inheriting AirSim's aerial capabilities -- whose upstream development has been archived -- CARLA-Air ensures this widely adopted flight stack continues to evolve within a modern infrastructure. Released with prebuilt binaries and full source: https://github.com/louiszengCN/CarlaAir

  3. LongCat-Next: Lexicalizing Modalities as Discrete Tokens

    The prevailing Next-Token Prediction (NTP) paradigm has driven the success of large language models through discrete autoregressive modeling. However, contemporary multimodal systems remain language-centric, often treating non-linguistic modalities as external attachments, leading to fragmented architectures and suboptimal integration. To transcend this limitation, we introduce Discrete Native Autoregressive (DiNA), a unified framework that represents multimodal information within a shared discrete space, enabling a consistent and principled autoregressive modeling across modalities. A key innovation is the Discrete Native Any-resolution Visual Transformer (dNaViT), which performs tokenization and de-tokenization at arbitrary resolutions, transforming continuous visual signals into hierarchical discrete tokens. Building on this foundation, we develop LongCat-Next, a native multimodal model that processes text, vision, and audio under a single autoregressive objective with minimal modality-specific design. As an industrial-strength foundation model, it excels at seeing, painting, and talking within a single framework, achieving strong performance across a wide range of multimodal benchmarks. In particular, LongCat-Next addresses the long-standing performance ceiling of discrete vision modeling on understanding tasks and provides a unified approach to effectively reconcile the conflict between understanding and generation. As an attempt toward native multimodality, we open-source the LongCat-Next and its tokenizers, hoping to foster further research and development in the community. GitHub: https://github.com/meituan-longcat/LongCat-Next

  4. Lingshu-Cell: A generative cellular world model for transcriptome modeling toward virtual cells

    Modeling cellular states and predicting their responses to perturbations are central challenges in computational biology and the development of virtual cells. Existing foundation models for single-cell transcriptomics provide powerful static representations, but they do not explicitly model the distribution of cellular states for generative simulation. Here, we introduce Lingshu-Cell, a masked discrete diffusion model that learns transcriptomic state distributions and supports conditional simulation under perturbation. By operating directly in a discrete token space that is compatible with the sparse, non-sequential nature of single-cell transcriptomic data, Lingshu-Cell captures complex transcriptome-wide expression dependencies across approximately 18,000 genes without relying on prior gene selection, such as filtering by high variability or ranking by expression level. Across diverse tissues and species, Lingshu-Cell accurately reproduces transcriptomic distributions, marker-gene expression patterns and cell-subtype proportions, demonstrating its ability to capture complex cellular heterogeneity. Moreover, by jointly embedding cell type or donor identity with perturbation, Lingshu-Cell can predict whole-transcriptome expression changes for novel combinations of identity and perturbation. It achieves leading performance on the Virtual Cell Challenge H1 genetic perturbation benchmark and in predicting cytokine-induced responses in human PBMCs. Together, these results establish Lingshu-Cell as a flexible cellular world model for in silico simulation of cell states and perturbation responses, laying the foundation for a new paradigm in biological discovery and perturbation screening.

  5. GEMS: Agent-Native Multimodal Generation with Memory and Skills

    Recent multimodal generation models have achieved remarkable progress on general-purpose generation tasks, yet continue to struggle with complex instructions and specialized downstream tasks. Inspired by the success of advanced agent frameworks such as Claude Code, we propose GEMS (Agent-Native Multimodal GEneration with Memory and Skills), a framework that pushes beyond the inherent limitations of foundational models on both general and downstream tasks. GEMS is built upon three core components. Agent Loop introduces a structured multi-agent framework that iteratively improves generation quality through closed-loop optimization. Agent Memory provides a persistent, trajectory-level memory that hierarchically stores both factual states and compressed experiential summaries, enabling a global view of the optimization process while reducing redundancy. Agent Skill offers an extensible collection of domain-specific expertise with on-demand loading, allowing the system to effectively handle diverse downstream applications. Across five mainstream tasks and four downstream tasks, evaluated on multiple generative backends, GEMS consistently achieves significant performance gains. Most notably, it enables the lightweight 6B model Z-Image-Turbo to surpass the state-of-the-art Nano Banana 2 on GenEval2, demonstrating the effectiveness of an agent harness in extending model capabilities beyond their original limits.

  6. Project Imaging-X: A Survey of 1000+ Open-Access Medical Imaging Datasets for Foundation Model Development

    Foundation models have demonstrated remarkable success across diverse domains and tasks, largely due to the abundance of large-scale, diverse, and high-quality datasets. However, in the field of medical imaging, the curation and assembly of such datasets is highly challenging due to the reliance on clinical expertise and strict ethical and privacy constraints, resulting in a scarcity of large-scale unified medical datasets and hindering the development of powerful medical foundation models. In this work, we present the largest survey to date of medical image datasets, covering over 1,000 open-access datasets with a systematic catalog of their modalities, tasks, anatomies, annotations, limitations, and potential for integration. Our analysis exposes a landscape that is modest in scale, fragmented across narrowly scoped tasks, and unevenly distributed across organs and modalities, which in turn limits the utility of existing medical image datasets for developing versatile and robust medical foundation models. To turn fragmentation into scale, we propose a metadata-driven fusion paradigm (MDFP) that integrates public datasets with shared modalities or tasks, thereby transforming multiple small data silos into larger, more coherent resources. Building on MDFP, we release an interactive discovery portal that enables end-to-end, automated medical image dataset integration, and compile all surveyed datasets into a unified, structured table that clearly summarizes their key characteristics and provides reference links, offering the community an accessible and comprehensive repository. By charting the current terrain and offering a principled path to dataset consolidation, our survey provides a practical roadmap for scaling medical imaging corpora, supporting faster data discovery, more principled dataset creation, and more capable medical foundation models.

  7. VGGRPO: Towards World-Consistent Video Generation with 4D Latent Reward

    Large-scale video diffusion models achieve impressive visual quality, yet often fail to preserve geometric consistency. Prior approaches improve consistency either by augmenting the generator with additional modules or applying geometry-aware alignment. However, architectural modifications can compromise the generalization of internet-scale pretrained models, while existing alignment methods are limited to static scenes and rely on RGB-space rewards that require repeated VAE decoding, incurring substantial compute overhead and failing to generalize to highly dynamic real-world scenes. To preserve the pretrained capacity while improving geometric consistency, we propose VGGRPO (Visual Geometry GRPO), a latent geometry-guided framework for geometry-aware video post-training. VGGRPO introduces a Latent Geometry Model (LGM) that stitches video diffusion latents to geometry foundation models, enabling direct decoding of scene geometry from the latent space. By constructing LGM from a geometry model with 4D reconstruction capability, VGGRPO naturally extends to dynamic scenes, overcoming the static-scene limitations of prior methods. Building on this, we perform latent-space Group Relative Policy Optimization with two complementary rewards: a camera motion smoothness reward that penalizes jittery trajectories, and a geometry reprojection consistency reward that enforces cross-view geometric coherence. Experiments on both static and dynamic benchmarks show that VGGRPO improves camera stability, geometry consistency, and overall quality while eliminating costly VAE decoding, making latent-space geometry-guided reinforcement an efficient and flexible approach to world-consistent video generation.

  8. Unify-Agent: A Unified Multimodal Agent for World-Grounded Image Synthesis

    Unified multimodal models provide a natural and promising architecture for understanding diverse and complex real-world knowledge while generating high-quality images. However, they still rely primarily on frozen parametric knowledge, which makes them struggle with real-world image generation involving long-tail and knowledge-intensive concepts. Inspired by the broad success of agents on real-world tasks, we explore agentic modeling to address this limitation. Specifically, we present Unify-Agent, a unified multimodal agent for world-grounded image synthesis, which reframes image generation as an agentic pipeline consisting of prompt understanding, multimodal evidence searching, grounded recaptioning, and final synthesis. To train our model, we construct a tailored multimodal data pipeline and curate 143K high-quality agent trajectories for world-grounded image synthesis, enabling effective supervision over the full agentic generation process. We further introduce FactIP, a benchmark covering 12 categories of culturally significant and long-tail factual concepts that explicitly requires external knowledge grounding. Extensive experiments show that our proposed Unify-Agent substantially improves over its base unified model across diverse benchmarks and real-world generation tasks, while approaching the world knowledge capabilities of the strongest closed-source models. As an early exploration of agent-based modeling for world-grounded image synthesis, our work highlights the value of tightly coupling reasoning, searching, and generation for reliable open-world agentic image synthesis.

  9. CutClaw: Agentic Hours-Long Video Editing via Music Synchronization

    Editing video content to align with audio has become a form of human-made digital art on today's social media. However, the time-consuming and repetitive nature of manual video editing has long been a challenge for filmmakers and professional content creators alike. In this paper, we introduce CutClaw, an autonomous multi-agent framework that leverages multiple Multimodal Language Models (MLLMs) to edit hours-long raw footage into meaningful short videos. It produces videos with synchronized music that follow instructions and have a visually appealing appearance. In detail, our approach begins by employing a hierarchical multimodal decomposition that captures both fine-grained details and global structures across visual and audio footage. Then, to ensure narrative consistency, a Playwriter Agent orchestrates the whole storytelling flow and structures the long-term narrative, anchoring visual scenes to musical shifts. Finally, to construct a short edited video, Editor and Reviewer Agents collaboratively optimize the final cut by selecting fine-grained visual content based on rigorous aesthetic and semantic criteria. We conduct detailed experiments to demonstrate that CutClaw significantly outperforms state-of-the-art baselines in generating high-quality, rhythm-aligned videos. The code is available at: https://github.com/GVCLab/CutClaw.

  10. daVinci-LLM: Towards the Science of Pretraining

    The foundational pretraining phase determines a model's capability ceiling, as post-training struggles to overcome capability foundations established during pretraining, yet it remains critically under-explored. This stems from a structural paradox: organizations with computational resources operate under commercial pressures that inhibit transparent disclosure, while academic institutions possess research freedom but lack pretraining-scale computational resources. daVinci-LLM occupies this unexplored intersection, combining industrial-scale resources with full research freedom to advance the science of pretraining. We adopt a fully-open paradigm that treats openness as scientific methodology, releasing complete data processing pipelines, full training processes, and systematic exploration results. Recognizing that the field lacks a systematic methodology for data processing, we employ the Data Darwinism framework, a principled L0-L9 taxonomy from filtering to synthesis. We train a 3B-parameter model from random initialization across 8T tokens using a two-stage adaptive curriculum that progressively shifts from foundational capabilities to reasoning-intensive enhancement. Through 200+ controlled ablations, we establish that: processing depth systematically enhances capabilities, making it a critical dimension alongside volume scaling; different domains exhibit distinct saturation dynamics, necessitating adaptive strategies from proportion adjustments to format shifts; compositional balance enables targeted intensification while preventing performance collapse; and evaluation protocol choices shape our understanding of pretraining progress. By releasing the complete exploration process, we enable the community to build upon our findings and systematic methodologies to form accumulative scientific knowledge in pretraining.

  11. MonitorBench: A Comprehensive Benchmark for Chain-of-Thought Monitorability in Large Language Models

    Large language models (LLMs) can generate chains of thought (CoTs) that are not always causally responsible for their final outputs. When such a mismatch occurs, the CoT no longer faithfully reflects the decision-critical factors driving the model's behavior, leading to the reduced CoT monitorability problem. However, a comprehensive and fully open-source benchmark for studying CoT monitorability remains lacking. To address this gap, we propose MonitorBench, a systematic benchmark for evaluating CoT monitorability in LLMs. MonitorBench provides: (1) a diverse set of 1,514 test instances with carefully designed decision-critical factors across 19 tasks spanning 7 categories to characterize when CoTs can be used to monitor the factors driving LLM behavior; and (2) two stress-test settings to quantify the extent to which CoT monitorability can be degraded. Extensive experiments across multiple popular LLMs with varying capabilities show that CoT monitorability is higher when producing the final target response requires structural reasoning through the decision-critical factor. Closed-source LLMs generally show lower monitorability, and there exists a negative relationship between monitorability and model capability. Moreover, both open- and closed-source LLMs can intentionally reduce monitorability under stress-tests, with monitorability dropping by up to 30% in some tasks that do not require structural reasoning over the decision-critical factors. Beyond these empirical insights, MonitorBench provides a basis for further research on evaluating future LLMs, studying advanced stress-test monitorability techniques, and developing new monitoring approaches.

  12. Extend3D: Town-Scale 3D Generation

    In this paper, we propose Extend3D, a training-free pipeline for 3D scene generation from a single image, built upon an object-centric 3D generative model. To overcome the limitations of fixed-size latent spaces in object-centric models for representing wide scenes, we extend the latent space in the x and y directions. Then, by dividing the extended latent space into overlapping patches, we apply the object-centric 3D generative model to each patch and couple them at each time step. Since patch-wise 3D generation with image conditioning requires strict spatial alignment between image and latent patches, we initialize the scene using a point cloud prior from a monocular depth estimator and iteratively refine occluded regions through SDEdit. We discovered that treating the incompleteness of 3D structure as noise during 3D refinement enables 3D completion via a concept we term under-noising. Furthermore, to address the sub-optimality of object-centric models for sub-scene generation, we optimize the extended latent during denoising, ensuring that the denoising trajectories remain consistent with the sub-scene dynamics. To this end, we introduce 3D-aware optimization objectives for improved geometric structure and texture fidelity. We demonstrate that our method yields better results than prior methods, as evidenced by human preference and quantitative experiments.
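
    The patch-coupling step, running the base model on overlapping latent windows and reconciling them at each time step, resembles MultiDiffusion-style averaging. The 1-D sketch below is an illustrative assumption about that coupling, not Extend3D's actual implementation; `couple_patches` and its arguments are invented names.

```python
import numpy as np

def couple_patches(latent, patch, stride, denoise_step):
    """Toy 1-D version of overlapping-patch coupling: run one denoising
    step on each window, then average overlapping predictions so that
    neighboring patches agree at every time step.
    (Illustrative assumption, not Extend3D's actual code.)"""
    W = len(latent)
    out = np.zeros(W)
    count = np.zeros(W)
    for x in range(0, W - patch + 1, stride):
        out[x:x + patch] += denoise_step(latent[x:x + patch])
        count[x:x + patch] += 1        # how many windows cover each cell
    return out / count                 # average where windows overlap

# Sanity check: with an identity "denoiser", coupling must reproduce
# the input exactly, regardless of how the windows overlap.
z = np.linspace(0.0, 1.0, 8)
z_out = couple_patches(z, patch=4, stride=2, denoise_step=lambda p: p)
```

The averaging is what keeps adjacent windows from drifting apart: any disagreement in an overlap region is blended away before the next denoising step.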

  13. FlowPIE: Test-Time Scientific Idea Evolution with Flow-Guided Literature Exploration

    Scientific idea generation (SIG) is critical to AI-driven autonomous research, yet existing approaches are often constrained by a static retrieval-then-generation paradigm, leading to homogeneous and insufficiently divergent ideas. In this work, we propose FlowPIE, a tightly coupled retrieval-generation framework that treats literature exploration and idea generation as a co-evolving process. FlowPIE expands literature trajectories via a flow-guided Monte Carlo Tree Search (MCTS) inspired by GFlowNets, using the quality of current ideas assessed by an LLM-based generative reward model (GRM) as a supervised signal to guide adaptive retrieval and construct a diverse, high-quality initial population. Based on this population, FlowPIE models idea generation as a test-time idea evolution process, applying selection, crossover, and mutation with the isolation island paradigm and GRM-based fitness computation to incorporate cross-domain knowledge. It effectively mitigates the information cocoons arising from over-reliance on parametric knowledge and static literature. Extensive evaluations demonstrate that FlowPIE consistently produces ideas with higher novelty, feasibility and diversity compared to strong LLM-based and agent-based frameworks, while enabling reward scaling during test time.

  14. Think Anywhere in Code Generation

    Recent advances in reasoning Large Language Models (LLMs) have primarily relied on upfront thinking, where reasoning occurs before the final answer. However, this approach suffers from critical limitations in code generation, where upfront thinking is often insufficient as a problem's full complexity only reveals itself during code implementation. Moreover, it cannot adaptively allocate reasoning effort throughout the code generation process, where difficulty varies significantly. In this paper, we propose Think-Anywhere, a novel reasoning mechanism that enables LLMs to invoke thinking on-demand at any token position during code generation. We achieve Think-Anywhere by first teaching LLMs to imitate the reasoning patterns through cold-start training, then leveraging outcome-based RL rewards to drive the model's autonomous exploration of when and where to invoke reasoning. Extensive experiments on four mainstream code generation benchmarks (i.e., LeetCode, LiveCodeBench, HumanEval, and MBPP) show that Think-Anywhere achieves state-of-the-art performance over both existing reasoning methods and recent post-training approaches, while demonstrating consistent generalization across diverse LLMs. Our analysis further reveals that Think-Anywhere enables the model to adaptively invoke reasoning at high-entropy positions, providing enhanced interpretability.
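
    A crude way to picture "invoking reasoning at high-entropy positions" is an entropy trigger over the next-token distribution. The fixed-threshold rule below is a hand-written stand-in for illustration only; the paper learns when to think via cold-start imitation plus RL, not via a rule like this, and `should_think` is an invented name.

```python
import math

def should_think(next_token_probs, threshold=2.0):
    """Fire an inline reasoning block when next-token entropy (in nats)
    is high, i.e. the model is uncertain about what to emit next.
    A fixed threshold is an assumed stand-in for the learned,
    RL-trained trigger described in the paper."""
    entropy = -sum(p * math.log(p) for p in next_token_probs if p > 0)
    return entropy > threshold

# Uncertain (uniform over 16 tokens, entropy = ln 16 ≈ 2.77): think.
uncertain = should_think([1 / 16] * 16)
# Confident (one token at 0.97, entropy ≈ 0.17): keep decoding.
confident = should_think([0.97, 0.01, 0.01, 0.01])
```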

  15. BizGenEval: A Systematic Benchmark for Commercial Visual Content Generation

    Recent advances in image generation models have expanded their applications beyond aesthetic imagery toward practical visual content creation. However, existing benchmarks mainly focus on natural image synthesis and fail to systematically evaluate models under the structured and multi-constraint requirements of real-world commercial design tasks. In this work, we introduce BizGenEval, a systematic benchmark for commercial visual content generation. The benchmark spans five representative document types: slides, charts, webpages, posters, and scientific figures, and evaluates four key capability dimensions: text rendering, layout control, attribute binding, and knowledge-based reasoning, forming 20 diverse evaluation tasks. BizGenEval contains 400 carefully curated prompts and 8,000 human-verified checklist questions to rigorously assess whether generated images satisfy complex visual and semantic constraints. We conduct large-scale benchmarking on 26 popular image generation systems, including state-of-the-art commercial APIs and leading open-source models. The results reveal substantial capability gaps between current generative models and the requirements of professional visual content creation. We hope BizGenEval serves as a standardized benchmark for real-world commercial visual content generation.

Techmeme (15)

  1. Solana-based DeFi platform Drift warns users about an "active attack" on its protocol; Arkham data said over $250M had moved from Drift to an interim wallet (Helene Braun/CoinDesk)

    Helene Braun / CoinDesk : Solana-based DeFi platform Drift warns users about an “active attack” on its protocol; Arkham data said over $250M had moved from Drift to an interim wallet —  The platform halted deposits while it investigates suspicious activity and urges users to proceed with caution.

  2. Cognichip, which is building an AI model for chip design, raised a $60M Series A led by Seligman Ventures, with participation from new board member Lip-Bu Tan (Tim Fernholz/TechCrunch)

    Tim Fernholz / TechCrunch : Cognichip, which is building an AI model for chip design, raised a $60M Series A led by Seligman Ventures, with participation from new board member Lip-Bu Tan —  The most advanced silicon chips have accelerated the development of artificial intelligence.  Now, can AI return the favor?

  3. Sources: the FBI has declared a recent China-linked hack of a system, which contained pen register and trap and trace surveillance returns, a "major incident" (John Sakellariadis/Politico)

    John Sakellariadis / Politico : Sources: the FBI has declared a recent China-linked hack of a system, which contained pen register and trap and trace surveillance returns, a “major incident” —  The determination suggests the hackers successfully compromised swathes of sensitive data stored directly on FBI systems …

  4. Franklin Templeton agrees to acquire CoinFund spinoff 250 Digital to form Franklin Crypto, which will offer strategies designed for institutional investors (Vicky Ge Huang/Wall Street Journal)

    Vicky Ge Huang / Wall Street Journal : Franklin Templeton agrees to acquire CoinFund spinoff 250 Digital to form Franklin Crypto, which will offer strategies designed for institutional investors —  Money manager's crypto investment unit will offer strategies designed for institutional investors

  5. Super Micro co-founder Yih-Shyan Liaw pleads not guilty to US federal charges of helping smuggle billions of dollars' worth of Nvidia-powered servers to China (Bob Van Voris/Bloomberg)

    Bob Van Voris / Bloomberg : Super Micro co-founder Yih-Shyan Liaw pleads not guilty to US federal charges of helping smuggle billions of dollars' worth of Nvidia-powered servers to China —  Super Micro Computer Inc. co-founder Yih-Shyan “Wally” Liaw pleaded not guilty Wednesday in New York to charges …

  6. Raspberry Pi raises prices by between $11.25 and $150, citing higher memory costs after December and February hikes, and unveils a 3GB Raspberry Pi 4 for $83.75 (Stevie Bonifield/The Verge)

    Stevie Bonifield / The Verge : Raspberry Pi raises prices by between $11.25 and $150, citing higher memory costs after December and February hikes, and unveils a 3GB Raspberry Pi 4 for $83.75 —  Prices are going up by over $100 in some cases thanks to those AI fools. … As of today, the price of the 16GB version …

  7. Sona, which uses AI to help companies with scheduling, HR, payroll, and other workflows, raised a $45M Series B led by N47, bringing its total funding to $100M+ (Chris Metinko/Axios)

    Chris Metinko / Axios : Sona, which uses AI to help companies with scheduling, HR, payroll, and other workflows, raised a $45M Series B led by N47, bringing its total funding to $100M+ —  Sona, which helps companies with workforce management, raised a $45 million Series B led by N47, co-founders Steffen Wulff Petersen …

  8. Sources: Paradigm, a major investor in Kalshi, is building its own prediction markets trading terminal that will cater to professional traders and market makers (Ben Weiss/Fortune)

    Ben Weiss / Fortune : Sources: Paradigm, a major investor in Kalshi, is building its own prediction markets trading terminal that will cater to professional traders and market makers —  One of the most influential investors in crypto wants a bigger slice of the burgeoning prediction markets space.

  9. Sources: OpenRouter is in talks to raise $120M led by CapitalG at a $1.3B post-money valuation; it now has $50M+ in annualized revenue, up from $10M+ in Oct. (The Information)

    The Information : Sources: OpenRouter is in talks to raise $120M led by CapitalG at a $1.3B post-money valuation; it now has $50M+ in annualized revenue, up from $10M+ in Oct. —  As more AI apps and agents shift to using multiple AI models, startups that help developers choose the right ones are gaining traction.

  10. Sources: SpaceX has filed confidentially for an IPO, putting it on track for a June listing; it could reportedly seek a valuation of $1.75T+ and raise ~$75B (Bloomberg)

    Bloomberg : Sources: SpaceX has filed confidentially for an IPO, putting it on track for a June listing; it could reportedly seek a valuation of $1.75T+ and raise ~$75B —  SpaceX has filed confidentially for an initial public offering, according to people familiar with the matter …

  11. Source: AWS' operation in Bahrain was damaged after an Iranian strike; Bahrain earlier said the civil defence force was "extinguishing a fire in a facility" (Financial Times)

    Financial Times : Source: AWS' operation in Bahrain was damaged after an Iranian strike; Bahrain earlier said the civil defence force was “extinguishing a fire in a facility” —  US president says he would not consider the move unless Strait of Hormuz is reopened

  12. SEC filing: Hasbro confirms a cyberattack and says it may take "several weeks" before the incident is resolved, after it detected an intrusion on March 28 (Zack Whittaker/TechCrunch)

    Zack Whittaker / TechCrunch : SEC filing: Hasbro confirms a cyberattack and says it may take “several weeks” before the incident is resolved, after it detected an intrusion on March 28 —  American toy-making giant Hasbro has confirmed a cyberattack, and the company says it may take “several weeks” before the incident is resolved.

  13. Secondary share marketplaces say OpenAI shares have fallen out of favor, in some cases becoming difficult to unload, as investors pivot quickly to Anthropic (Hema Parmar/Bloomberg)

    Hema Parmar / Bloomberg : Secondary share marketplaces say OpenAI shares have fallen out of favor, in some cases becoming difficult to unload, as investors pivot quickly to Anthropic —  OpenAI shares have fallen out of favor on the secondary market — in some cases becoming almost impossible to unload …

  14. Anthropic's Claude Code leak reveals its "Kairos" updates, including letting Claude work in the background and using a "dream mode" to consolidate its memories (The Information)

    The Information : Anthropic's Claude Code leak reveals its “Kairos” updates, including letting Claude work in the background and using a “dream mode” to consolidate its memories —  Anthropic's cybersecurity employees have probably spent the past week scolding their colleagues.

  15. US kid safety groups say they didn't know OpenAI had entirely funded the Parents & Kids Safe AI Coalition to promote CA legislation until after it was announced (Emily Shugerman/The San Francisco ...)

    Emily Shugerman / The San Francisco Standard : US kid safety groups say they didn't know OpenAI had entirely funded the Parents & Kids Safe AI Coalition to promote CA legislation until after it was announced —  In mid-March, organizers for child safety groups across the country received emails from an organization called …

Solidot(15)

  1. European countries rapidly embrace green tech and electric vehicles

    With the blockade of the Strait of Hormuz driving up oil and gas prices worldwide, several European countries have turned to green technology and bought more electric vehicles. Data show that in the first three weeks of March, UK heat pump sales rose 51% over the same period the previous month, solar sales rose 54%, and EV charger sales rose 20%. EV sales at French online used-car retailer Aramisauto nearly doubled between mid-February and March 9. Amsterdam-based used-car marketplace Olx said customer inquiries about EVs surged on its platforms in France, Romania, Portugal, and Poland. On Finn.no, Norway's largest used-car marketplace, EV sales have overtaken diesel.

  2. Multiple Baidu robotaxis break down simultaneously

    Baidu's Apollo Go (Luobo Kuaipao) operates a robotaxi service in Wuhan. At around 8 pm on Tuesday, March 31, its driverless taxis stalled en masse on the city's roads. According to photos and videos circulating widely on social media, the stricken Apollo Go robotaxis stopped not only at roadsides but also in the middle of roads and even on elevated expressways, and some passengers were trapped inside for over an hour. Wuhan traffic police said the preliminary assessment is a system failure, that no one was injured, and that all passengers exited safely. It is unclear how many Baidu robotaxis were affected. Photos and videos on social networks show the suddenly stopped robotaxis caused at least several rear-end collisions, and one Wuhan netizen reported seeing at least a dozen stalled robotaxis. Baidu has not yet explained the incident.

  3. Sweden returns to traditional paper-based classroom education

    The problems that digital education and social media pose for children and teenagers have drawn growing attention and debate in recent years. Like other countries, Sweden spent the past few decades gradually abandoning paper books in favor of tablets and digital resources, aiming to prepare students for the online world. But the controversy over digital education ultimately led Sweden to announce in 2023 a return to a traditional paper-based classroom model: paper books are back in classrooms, and students are again learning to write by hand with pencil or pen on paper. The Swedish government also plans to roll out a nationwide ban on mobile phones in schools. This marks a major shift in Sweden's educational model. Swedish officials stressed that schools will not abandon digital technology entirely; digital aids will mainly be used to support students in the upper grades.

  4. Neanderthals survived on the brink of extinction for 350,000 years

    From 400,000 to 45,000 years ago, Neanderthals alone occupied most of Eurasia, hunting large animals, gathering plants, skillfully making stone tools, and fashioning clothing from hides. But their survival was precarious. Two new studies show that Neanderthals lived in small, geographically distant groups, experienced severe inbreeding, and came close to extinction 75,000 years ago. Inbreeding is widely considered harmful to adapting to environmental change, but if the environment stays stable for long enough, inbred populations can persist for a long time. The researchers report that 75,000 years ago Neanderthal sites and skeletal remains were widely distributed across Europe and their genomes were relatively diverse. But the number of sites fell during the glacial period 75,000 to 65,000 years ago, and by 60,000 years ago all genetic diversity had vanished, leaving a single lineage. When the climate fluctuated again 45,000 years ago, compounded by the arrival of modern humans in Eurasia, the effective Neanderthal population plummeted within three thousand years, hitting its lowest point about 42,000 years ago before disappearing entirely.

  5. Qilin was the most active ransomware group of the past year

    From March 2025 to March 2026, ransomware groups published 7,655 victim-organization listings over 376 days, an average of about 20 per day, or one new victim organization every 71 minutes. Analysis of the victim lists shows that of the 129 active ransomware groups, the top five published 3,027 listings, 40% of the total. The most active was Qilin with 1,179 listings (15.4% of the total), followed by Akira with 706 (9.2%), INC Ransom with 415 (5.4%), Play with 386 (5.0%), and Safepay with 341 (4.5%). Qilin's victims spanned 74 countries, led by the US (438), France (55), Canada (48), Spain (41), and the UK (36); Akira's and Play's victims were both concentrated mainly in the US. Qilin was originally called Agenda and has used the name Qilin since September 2022.

  6. An oral drug may help eliminate jet lag

    Could a future arrive in which jet lag is cured simply by taking a pill? A research team at Osaka University and elsewhere published findings in an American science journal reporting that experiments uncovered a compound that speeds up the biological clock. The team confirmed that jet-lagged, drowsy mice given the compound recovered earlier than usual, and said it "could also be developed into a drug." Jet lag arises when the clock genes that regulate the body's circadian clock fall out of sync. The human circadian clock runs on a roughly 25-hour cycle and is reset by exposure to daylight. The compound the team discovered, named "Mic-628," acts only on clock genes; it can simultaneously activate the genes in the brain's central clock that govern the whole body and those in peripheral organs such as the lungs. In experiments, regardless of when the mice were dosed, the compound advanced the biological clock by 2 hours. When a solution of the compound was given to mice with 6 hours of jet lag, recovery, which normally takes about 7 days, was shortened to 4 days.

  7. Russia steps up its crackdown on VPNs

    Russian digital minister Maksut Shadayev announced via the state-backed messaging app MAX that the government will intensify its crackdown on VPNs. Millions of Russians use VPNs to bypass blocking and censorship. Russia has previously cut off mobile internet multiple times and disrupted the major messaging apps WhatsApp and Telegram. Shadayev said the latest task is to reduce VPN use. But blocking VPNs is a cat-and-mouse game: when one is blocked, users switch to another, and young Russians change VPNs daily.

  8. A Starlink satellite suddenly breaks up in orbit

    SpaceX confirmed that it lost contact with one of its Starlink satellites last Sunday. SpaceX avoided the word "explosion" in describing the satellite breaking into dozens of pieces. According to observations by LeoLabs, which tracks objects in low Earth orbit with a ground-based radar network, Starlink satellite number 34343 broke up due to an "internal high-energy event" rather than a collision with space debris or another object. The satellite was at an altitude of roughly 560 km, so its debris will deorbit and burn up in the atmosphere within weeks. SpaceX says it is investigating the cause. LeoLabs believes the latest event resembles the Starlink satellite breakup of December 17, 2025, which was likewise attributed to an internal high-energy event rather than a collision with another space object.

  9. Tech CEOs are eager to cite AI as the excuse for mass layoffs

    Today's tech giants increasingly invoke AI when conducting mass layoffs. Giants such as Google, Amazon, and Meta, along with mid-sized tech companies like Pinterest and Atlassian, have all announced or warned of upcoming layoffs in recent weeks, saying advances in AI let companies do more with fewer people. Tech investor Terrence Rohan said citing AI sounds far better than cost pressures or pleasing shareholders, stressing that AI makes CEOs look less like villains cutting jobs to save money. Rohan thinks the AI excuse is not entirely baseless: companies increasingly write code with the help of AI tools, which are now good enough to get the same work done with fewer people, threatening once high-paying roles such as programmers and software engineers. Another factor behind AI-driven layoffs is that tech giants are racing to pour hundreds of billions of dollars into AI data centers, and CEOs need ways to ease investor concerns about the massive spending; the cost cutting ultimately lands on payroll.

  10. African study links temperatures above 20°C to higher miscarriage rates for male fetuses

    A study published in PNAS analyzed nearly 3 million births across 33 sub-Saharan African countries and found that exposure to temperatures above 20°C in early pregnancy increases the risk of miscarriage, with male fetuses primarily affected. Although 20°C is not especially hot, the researchers say their analysis points to a threshold effect: once temperatures exceed that value, the male birth rate begins to decline. More extreme temperatures did not produce correspondingly larger changes, suggesting the effect is triggered only when heat crosses a specific biological stress point. The study reflects a long-standing, widely accepted biological principle, the "fragile male hypothesis," which holds that male fetuses are more vulnerable to stress during gestation, leading to higher miscarriage rates.

  11. Oracle lays off about 30,000 people

    Oracle sent an email at 6 am Tuesday announcing layoffs of 20,000 to 30,000 people, about 18% of its workforce. Employees in the US, India, and elsewhere said they received termination notices at almost the same moment, sent by "Oracle Leadership." There was no advance warning from HR, no communication with direct managers, no notice of any kind, just a single email. Affected employees posted screenshots on Reddit's r/employeesOfOracle and on Blind. The email was very brief, merely informing employees that after evaluating current business needs, the company had decided to eliminate some positions. The hardest-hit units were RHS (Revenue and Health Sciences), SVOS (SaaS and Virtual Operations Services), and the NetSuite India Development Centre.

  12. Archaeologists discover a 3,500-year-old loom

    About 3,500 years ago, a fire leveled the homes and workshops of Cabezo Redondo, a Bronze Age settlement near present-day Villena, Spain, but the rubble of the collapsed buildings also unexpectedly preserved a wooden loom. The loom weights and the components made of wood and plant fiber all survive intact. The loom was part of Europe's Bronze Age "textile revolution," characterized by technological advances and economic change in textile production. Cabezo Redondo was occupied from 2100 BC to 1250 BC and covers one hectare. Dwellings were built in terraces against the hillside, with workbenches, hearths, silos, and storage vessels. Analysis of plant and animal remains indicates the local economy centered on intensive agriculture.

  13. Claude Code source code leaks

    Claude Code, the AI coding tool developed by Anthropic, accidentally leaked its unobfuscated source code via a source map file when published to npm; once extracted, the code was uploaded to GitHub and other platforms. Users discovered that Claude Code uses regular expressions to detect negative sentiment in user prompts. Matching a regex is far faster than calling a large model and saves significant compute.
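    The leaked patterns themselves are not reproduced here. As an illustration only, a keyword-regex frustration check of the kind described might look like the sketch below; the phrase list and function name are hypothetical, not Anthropic's:

    ```python
    import re

    # Hypothetical phrase list for illustration; not the actual leaked patterns.
    NEGATIVE_PATTERNS = re.compile(
        r"\b(this is (so )?frustrating|doesn'?t work|still broken|useless|"
        r"i give up|why (won'?t|can'?t) you)\b",
        re.IGNORECASE,
    )

    def prompt_seems_frustrated(prompt: str) -> bool:
        """Single linear-time regex scan instead of a model call."""
        return NEGATIVE_PATTERNS.search(prompt) is not None

    print(prompt_seems_frustrated("This is so frustrating, the build still doesn't work"))  # True
    print(prompt_seems_frustrated("Please add a unit test for the parser"))                 # False
    ```

    The design trade-off is the one the leak highlights: a compiled regex costs microseconds per prompt, whereas routing every prompt through a model for sentiment classification would add latency and inference cost.
    
    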

  14. Microsoft plans more native apps for Windows 11

    In an era of soaring memory prices, Microsoft has begun looking at reducing its apps' memory footprint, planning to build more native apps for Windows 11 rather than resource-hungry web apps. Rudy Huyn, a Microsoft Partner Architect working on the app store and File Explorer, said he is assembling a team to develop native Windows apps. Microsoft's recently launched tools such as Clipchamp and Copilot are built on web technologies and the Progressive Web App architecture, which is easy to develop for but consumes more resources.

  15. Areas around AI data centers warm by up to 9.1°C

    Researchers at the University of Cambridge used satellite data to measure land surface temperatures over the past 20 years, cross-referencing the coordinates of more than 8,400 AI data centers. Because surface temperatures can be influenced by other factors, the researchers focused on data centers far from densely populated areas. They found that within months of an AI data center coming online, surface temperatures rose by an average of 2°C, and by as much as 9.1°C in the most extreme case. The warming is not confined to the immediate surroundings: the team found the effect can reach 10 km from a data center, with the temperature increase falling by only 30% at 7 km. Using population data, the researchers estimate that more than 340 million people live within 10 km of a data center, meaning they inhabit an environment hotter than places without one. The researchers said they were surprised by the results and believe data centers' environmental impact will become a major issue.