OrangeBot.AI Digest — 2026-04-26

88 headlines across 6 sources, aggregated for the day.

Hacker News (15)

  1. AI should elevate your thinking, not replace it (www.koshyjohn.com)
  2. Waymo says it can't avoid bike lanes because riders want to be dropped off in them (road.cc)
  3. GoDaddy gave a domain to a stranger without any documentation (anchor.host)
  4. An AI agent deleted our production database. The agent's confession is below (twitter.com)
  5. Clay PCB Tutorial (feministhackerspaces.cargo.site)
  6. GitHub unwanted UX change: issue links now open in a popup (github.com)
  7. SWE-bench Verified no longer measures frontier coding capabilities (openai.com)
  8. Asahi Linux Progress Report: Linux 7.0 (asahilinux.org)
  9. Show HN: Turning a Gaussian Splat into a videogame (blog.playcanvas.com)
  10. Statecharts: hierarchical state machines (statecharts.dev)
  11. The West forgot how to make things, now it’s forgetting how to code (techtrenches.dev)
  12. GnuPG – post-quantum crypto landing in mainline (lists.gnupg.org)
  13. EU Age Control: The trojan horse for digital IDs (juraj.bednar.io)
  14. Mahjong: A Visual Guide (themahjong.guide)
  15. The Super Nintendo Cartridges (2024) (fabiensanglard.net)

GitHub Trending (13)

  1. mattpocock / skills
  2. Alishahryar1 / free-claude-code
  3. Z4nzu / hackingtool
  4. abhigyanpatwari / GitNexus
  5. PostHog / posthog
  6. microsoft / typescript-go
  7. trycua / cua
  8. gastownhall / beads
  9. curl / curl
  10. home-assistant / core
  11. codecrafters-io / build-your-own-x
  12. openclaw / openclaw
  13. ComposioHQ / awesome-codex-skills

Product Hunt (15)

  1. Claude Connectors

    New connectors in Claude for everyday life

  2. Edgee Team

    Strava for your coding assistants

  3. Happenstance

    Search your network with AI

  4. QuickCompare by Trismik

    Compare LLMs on your data, measure, and pick the best.

  5. GPT-5.5 by OpenAI

    OpenAI's smartest and most intuitive to use model yet

  6. Pica

    Fully native app for managing your fonts on macOS

  7. Free chart generator by Embedful

    Turn CSV & Excel files into charts in seconds

  8. Inrō AI

    Your AI Agent for Instagram Marketing

  9. ZeroHuman.

    Your AI Co-Founder: OpenClaw x Paperclip x Spud

  10. XChat

    The standalone, encrypted messaging app from X

  11. Gemini Personal Intelligence

    Gemini answers with context from your Google apps

  12. Architecto

    Design, review, and document cloud architecture with AI

  13. PromptPaste

    Your private AI prompt library on Mac, iPhone, and iPad

  14. Genspark for Excel

    AI assistant for Excel formulas, charts, insights.

  15. Euphony

    Render AI chat data and Codex logs into browsable views

Hugging Face (15)

  1. LLaTiSA: Towards Difficulty-Stratified Time Series Reasoning from Visual Perception to Semantics

    Comprehensive understanding of time series remains a significant challenge for Large Language Models (LLMs). Current research is hindered by fragmented task definitions and benchmarks with inherent ambiguities, precluding rigorous evaluation and the development of unified Time Series Reasoning Models (TSRMs). To bridge this gap, we formalize Time Series Reasoning (TSR) via a four-level taxonomy of increasing cognitive complexity. We introduce HiTSR, a hierarchical time series reasoning dataset comprising 83k samples with diverse task combinations and verified Chain-of-Thought (CoT) trajectories. Leveraging HiTSR, we propose LLaTiSA, a strong TSRM that integrates visualized patterns with precision-calibrated numerical tables to enhance the temporal perception of Vision-Language Models (VLMs). Through a multi-stage curriculum fine-tuning strategy, LLaTiSA achieves superior performance and exhibits robust out-of-distribution generalization across diverse TSR tasks and real-world scenarios. Our code is available at https://github.com/RainingNovember/LLaTiSA.

  2. WorldMark: A Unified Benchmark Suite for Interactive Video World Models

    Interactive video generation models such as Genie, YUME, HY-World, and Matrix-Game are advancing rapidly, yet every model is evaluated on its own benchmark with private scenes and trajectories, making fair cross-model comparison impossible. Existing public benchmarks offer useful metrics such as trajectory error, aesthetic scores, and VLM-based judgments, but none supplies the standardized test conditions -- identical scenes, identical action sequences, and a unified control interface -- needed to make those metrics comparable across models with heterogeneous inputs. We introduce WorldMark, the first benchmark that provides such a common playing field for interactive Image-to-Video world models. WorldMark contributes: (1) a unified action-mapping layer that translates a shared WASD-style action vocabulary into each model's native control format, enabling apples-to-apples comparison across six major models on identical scenes and trajectories; (2) a hierarchical test suite of 500 evaluation cases covering first- and third-person viewpoints, photorealistic and stylized scenes, and three difficulty tiers from Easy to Hard spanning 20-60s; and (3) a modular evaluation toolkit for Visual Quality, Control Alignment, and World Consistency, designed so that researchers can reuse our standardized inputs while plugging in their own metrics as the field evolves. We will release all data, evaluation code, and model outputs to facilitate future research. Beyond offline metrics, we launch World Model Arena (warena.ai), an online platform where anyone can pit leading world models against each other in side-by-side battles and watch the live leaderboard.

  3. UniT: Toward a Unified Physical Language for Human-to-Humanoid Policy Learning and World Modeling

    Scaling humanoid foundation models is bottlenecked by the scarcity of robotic data. While massive egocentric human data offers a scalable alternative, bridging the cross-embodiment chasm remains a fundamental challenge due to kinematic mismatches. We introduce UniT (Unified Latent Action Tokenizer via Visual Anchoring), a framework that establishes a unified physical language for human-to-humanoid transfer. Grounded in the philosophy that heterogeneous kinematics share universal visual consequences, UniT employs a tri-branch cross-reconstruction mechanism: actions predict vision to anchor kinematics to physical outcomes, while vision reconstructs actions to filter out irrelevant visual confounders. Concurrently, a fusion branch synergizes these purified modalities into a shared discrete latent space of embodiment-agnostic physical intents. We validate UniT across two paradigms: 1) Policy Learning (VLA-UniT): By predicting these unified tokens, it effectively leverages diverse human data to achieve state-of-the-art data efficiency and robust out-of-distribution (OOD) generalization on both a humanoid simulation benchmark and real-world deployments, notably demonstrating zero-shot task transfer. 2) World Modeling (WM-UniT): By aligning cross-embodiment dynamics via unified tokens as conditions, it realizes direct human-to-humanoid action transfer. This alignment ensures that human data seamlessly translates into enhanced action controllability for humanoid video generation. Ultimately, by inducing a highly aligned cross-embodiment representation (empirically verified by t-SNE visualizations revealing the convergence of human and humanoid features into a shared manifold), UniT offers a scalable path to distill vast human knowledge into general-purpose humanoid capabilities.

  4. StyleID: A Perception-Aware Dataset and Metric for Stylization-Agnostic Facial Identity Recognition

    Creative face stylization aims to render portraits in diverse visual idioms such as cartoons, sketches, and paintings while retaining recognizable identity. However, current identity encoders, which are typically trained and calibrated on natural photographs, exhibit severe brittleness under stylization. They often mistake changes in texture or color palette for identity drift or fail to detect geometric exaggerations. This reveals the lack of a style-agnostic framework to evaluate and supervise identity consistency across varying styles and strengths. To address this gap, we introduce StyleID, a human perception-aware dataset and evaluation framework for facial identity under stylization. StyleID comprises two datasets: (i) StyleBench-H, a benchmark that captures human same-different verification judgments across diffusion- and flow-matching-based stylization at multiple style strengths, and (ii) StyleBench-S, a supervision set derived from psychometric recognition-strength curves obtained through controlled two-alternative forced-choice (2AFC) experiments. Leveraging StyleBench-S, we fine-tune existing semantic encoders to align their similarity orderings with human perception across styles and strengths. Experiments demonstrate that our calibrated models yield significantly higher correlation with human judgments and enhanced robustness for out-of-domain, artist-drawn portraits. All of our datasets, code, and pretrained models are publicly available at https://kwanyun.github.io/StyleID_page/

  5. Co-Evolving LLM Decision and Skill Bank Agents for Long-Horizon Tasks

    Long-horizon interactive environments are a testbed for evaluating agents' skill-usage abilities: they demand multi-step reasoning, the chaining of multiple skills over many timesteps, and robust decision making under delayed rewards and partial observability, and games are a good instance of such environments. Large Language Models (LLMs) offer a promising alternative as game-playing agents, but they often struggle with consistent long-horizon decision making because they lack a mechanism to discover, retain, and reuse structured skills across episodes. We present COSPLAY, a co-evolution framework in which an LLM decision agent retrieves skills from a learnable skill bank to guide action taking, while an agent-managed skill pipeline discovers reusable skills from the agent's unlabeled rollouts to form the skill bank. The framework improves the decision agent's skill retrieval and action generation, while the skill-bank agent continually extracts, refines, and updates skills together with their contracts. Experiments across six game environments show that COSPLAY with an 8B base model achieves over a 25.1% average reward improvement against four frontier LLM baselines on single-player game benchmarks while remaining competitive on multi-player social reasoning games.

  6. Seeing Fast and Slow: Learning the Flow of Time in Videos

    How can we tell whether a video has been sped up or slowed down? How can we generate videos at different speeds? Although videos have been central to modern computer vision research, little attention has been paid to perceiving and controlling the passage of time. In this paper, we study time as a learnable visual concept and develop models for reasoning about and manipulating the flow of time in videos. We first exploit the multimodal cues and temporal structure naturally present in videos to learn, in a self-supervised manner, to detect speed changes and estimate playback speed. We then show that these learned temporal reasoning models enable us to curate the largest slow-motion video dataset to date from noisy in-the-wild sources. Such slow-motion footage, typically filmed by high-speed cameras, contains substantially richer temporal detail than standard videos. Using this data, we further develop models capable of temporal control, including speed-conditioned video generation, which produces motion at a specified playback speed, and temporal super-resolution, which transforms low-FPS, blurry videos into high-FPS sequences with fine-grained temporal details. Our findings highlight time as a manipulable, perceptual dimension in video learning, opening doors to temporally controllable video generation, temporal forensics detection, and potentially richer world-models that understand how events unfold over time.
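
The self-supervised recipe sketched in the abstract can be imitated with a trivial label generator (a toy illustration under my own naming, not the paper's pipeline): resample frame indices at a chosen playback speed, and the sampling speed itself becomes the free training label for a speed estimator.

```python
def speed_resample(num_frames, speed, clip_len):
    """Frame indices simulating playback at `speed`x from a source video.
    speed > 1 skips frames (sped up); speed < 1 lingers on frames (slowed)."""
    return [min(int(round(i * speed)), num_frames - 1) for i in range(clip_len)]

# The supervision comes for free from the augmentation itself:
# speed_resample(100, 2.0, 5) -> [0, 2, 4, 6, 8]
```

A model trained to recover `speed` from such clips is, in effect, learning to detect speed changes without any human labels.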

  7. VLAA-GUI: Knowing When to Stop, Recover, and Search, A Modular Framework for GUI Automation

    Autonomous GUI agents face two fundamental challenges: early stopping, where agents prematurely declare success without verifiable evidence, and repetitive loops, where agents cycle through the same failing actions without recovery. We present VLAA-GUI, a modular GUI agentic framework built around three integrated components that guide the system on when to Stop, Recover, and Search. First, a mandatory Completeness Verifier enforces UI-observable success criteria and verification at every finish step -- with an agent-level verifier that cross-examines completion claims with decision rules, rejecting those lacking direct visual evidence. Second, a mandatory Loop Breaker provides multi-tier filtering: switching interaction mode after repeated failures, forcing strategy changes after persistent screen-state recurrence, and binding reflection signals to strategy shifts. Third, an on-demand Search Agent searches online for unfamiliar workflows by directly querying a capable LLM with search ability, returning results as plain text. We additionally integrate a Coding Agent for code-intensive actions and a Grounding Agent for precise action grounding, both invoked on demand when required. We evaluate VLAA-GUI across five top-tier backbones, including Opus 4.5, 4.6 and Gemini 3.1 Pro, on two benchmarks with Linux and Windows tasks, achieving top performance on both (77.5% on OSWorld and 61.0% on WindowsAgentArena). Notably, three of the five backbones surpass human performance (72.4%) on OSWorld in a single pass. Ablation studies show that all three proposed components consistently improve a strong backbone, while a weaker backbone benefits more from these tools when the step budget is sufficient. Further analysis also shows that the Loop Breaker nearly halves wasted steps for loop-prone models.
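
The Loop Breaker's screen-state recurrence trigger can be sketched in a few lines (a hypothetical helper of my own, not the paper's code): flag a loop once the same screen-state hash recurs a few times within a recent window, which is the signal used to force a strategy change.

```python
from collections import Counter

def loop_detected(state_hashes, window=6, repeat_threshold=3):
    """True when any screen-state hash appears `repeat_threshold` or more
    times within the last `window` observations."""
    counts = Counter(state_hashes[-window:])
    return any(n >= repeat_threshold for n in counts.values())

# loop_detected(["a", "b", "a", "b", "a"]) -> True  (state "a" recurs 3 times)
# loop_detected(["a", "b", "c", "d"])      -> False
```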

  8. Hybrid Policy Distillation for LLMs

    Knowledge distillation (KD) is a powerful paradigm for compressing large language models (LLMs), whose effectiveness depends on intertwined choices of divergence direction, optimization strategy, and data regime. We break down the design of existing KD methods and present a unified view that establishes connections between them, reformulating KD as a reweighted log-likelihood objective at the token level. We further propose Hybrid Policy Distillation (HPD), which integrates the complementary advantages of forward and reverse KL to balance mode coverage and mode-seeking, and combines off-policy data with lightweight, approximate on-policy sampling. We validate HPD on long-generation math reasoning as well as short-generation dialogue and code tasks, demonstrating improved optimization stability, computational efficiency, and final performance across diverse model families and scales. The code related to this work is available at https://github.com/zwhong714/Hybrid-Policy-Distillation.
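
The hybrid forward/reverse-KL idea can be illustrated with a toy token-level loss (a sketch under stated assumptions; the function names are mine, not the paper's): forward KL(teacher||student) encourages mode coverage, reverse KL(student||teacher) encourages mode seeking, and a single weight blends the two.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    # KL(p || q), computed per token distribution along the vocab axis
    return np.sum(p * (np.log(p) - np.log(q)), axis=-1)

def hybrid_kd_loss(student_logits, teacher_logits, alpha=0.5):
    """alpha * KL(teacher||student) + (1 - alpha) * KL(student||teacher),
    averaged over all token positions."""
    s = softmax(student_logits)
    t = softmax(teacher_logits)
    return float(np.mean(alpha * kl(t, s) + (1 - alpha) * kl(s, t)))
```

Identical logits give zero loss; any mismatch is strictly positive, and alpha trades coverage against mode seeking.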

  9. TingIS: Real-time Risk Event Discovery from Noisy Customer Incidents at Enterprise Scale

    Real-time detection and mitigation of technical anomalies are critical for large-scale cloud-native services, where even minutes of downtime can result in massive financial losses and diminished user trust. While customer incidents serve as a vital signal for discovering risks missed by monitoring, extracting actionable intelligence from this data remains challenging due to extreme noise, high throughput, and the semantic complexity of diverse business lines. In this paper, we present TingIS, an end-to-end system designed for enterprise-grade incident discovery. At the core of TingIS is a multi-stage event linking engine that synergizes efficient indexing techniques with Large Language Models (LLMs) to make informed decisions on event merging, enabling the stable extraction of actionable incidents from just a handful of diverse user descriptions. This engine is complemented by a cascaded routing mechanism for precise business attribution and a multi-dimensional noise reduction pipeline that integrates domain knowledge, statistical patterns, and behavioral filtering. Deployed in a production environment handling a peak throughput of over 2,000 messages per minute and 300,000 messages per day, TingIS achieves a P90 alert latency of 3.5 minutes and a 95% discovery rate for high-priority incidents. Benchmarks constructed from real-world data demonstrate that TingIS significantly outperforms baseline methods in routing accuracy, clustering quality, and Signal-to-Noise Ratio.

  10. Vista4D: Video Reshooting with 4D Point Clouds

    We present Vista4D, a robust and flexible video reshooting framework that grounds the input video and target cameras in a 4D point cloud. Specifically, given an input video, our method re-synthesizes the scene with the same dynamics from a different camera trajectory and viewpoint. Existing video reshooting methods often struggle with depth estimation artifacts of real-world dynamic videos, while also failing to preserve content appearance and failing to maintain precise camera control for challenging new trajectories. We build a 4D-grounded point cloud representation with static pixel segmentation and 4D reconstruction to explicitly preserve seen content and provide rich camera signals, and we train with reconstructed multiview dynamic data for robustness against point cloud artifacts during real-world inference. Our results demonstrate improved 4D consistency, camera control, and visual quality compared to state-of-the-art baselines under a variety of videos and camera paths. Moreover, our method generalizes to real-world applications such as dynamic scene expansion and 4D scene recomposition. See our project page for results, code, and models: https://eyeline-labs.github.io/Vista4D

  11. EditCrafter: Tuning-free High-Resolution Image Editing via Pretrained Diffusion Model

    We propose EditCrafter, a high-resolution image editing method that operates without tuning, leveraging pretrained text-to-image (T2I) diffusion models to process images at resolutions significantly exceeding those used during training. Leveraging the generative priors of large-scale T2I diffusion models enables the development of a wide array of novel generation and editing applications. Although numerous image editing methods have been proposed based on diffusion models and exhibit high-quality editing results, they are difficult to apply to images with arbitrary aspect ratios or higher resolutions since they only work at the training resolutions (512x512 or 1024x1024). Naively applying patch-wise editing fails with unrealistic object structures and repetition. To address these challenges, we introduce EditCrafter, a simple yet effective editing pipeline. EditCrafter operates by first performing tiled inversion, which preserves the original identity of the input high-resolution image. We further propose a noise-damped manifold-constrained classifier-free guidance (NDCFG++) that is tailored for high-resolution image editing from the inverted latent. Our experiments show that EditCrafter achieves impressive editing results across various resolutions without fine-tuning or optimization.

  12. Context Unrolling in Omni Models

    We present Omni, a unified multimodal model natively trained on diverse modalities, including text, images, videos, 3D geometry, and hidden representations. We find that such training enables Context Unrolling, where the model explicitly reasons across multiple modal representations before producing predictions. This process enables the model to aggregate complementary information across heterogeneous modalities, facilitating a more faithful approximation of the shared multimodal knowledge manifold and improving downstream reasoning fidelity. As a result, Omni achieves strong performance on both multimodal generation and understanding benchmarks, while demonstrating advanced multimodal reasoning capabilities, including in-context generation of text, image, video, and 3D geometry.

  13. UniGenDet: A Unified Generative-Discriminative Framework for Co-Evolutionary Image Generation and Generated Image Detection

    In recent years, significant progress has been made in both image generation and generated image detection. Despite their rapid, yet largely independent, development, these two fields have evolved distinct architectural paradigms: the former predominantly relies on generative networks, while the latter favors discriminative frameworks. A recent trend in both domains is the use of adversarial information to enhance performance, revealing potential for synergy. However, the significant architectural divergence between them presents considerable challenges. Departing from previous approaches, we propose UniGenDet: a Unified generative-discriminative framework for co-evolutionary image Generation and generated image Detection. To bridge the task gap, we design a symbiotic multimodal self-attention mechanism and a unified fine-tuning algorithm. This synergy allows the generation task to improve the interpretability of authenticity identification, while authenticity criteria guide the creation of higher-fidelity images. Furthermore, we introduce a detector-informed generative alignment mechanism to facilitate seamless information exchange. Extensive experiments on multiple datasets demonstrate that our method achieves state-of-the-art performance. Code: https://github.com/Zhangyr2022/UniGenDet.

  14. Temporally Extended Mixture-of-Experts Models

    Mixture-of-Experts models, now popular for scaling capacity at fixed inference speed, switch experts at nearly every token. Once a model outgrows available GPU memory, this churn can render optimizations like offloading and pre-fetching ineffective. We make the case that the options framework in reinforcement learning is a perfect match to tackle this problem, and argue for temporally extended mixture-of-experts layers. Building on the option-critic framework with deliberation costs, we add a controller to each layer that learns when to switch expert sets and which to load. By applying this to gpt-oss-20b with low-rank adapters and a self-distillation reward, our method reduces switch rates from over 50% to below 5% while retaining up to 90% of base-model accuracy on MATH, MMLU, and MMMLU. This shows that even existing pre-trained models can be converted to temporally extended MoEs with lightweight training, with the deliberation cost allowing model trainers to trade off switching rates against capability. We hope this opens a principled path, grounded in the options framework, for memory-efficient serving and continual learning in ever-growing MoE models.
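
The deliberation-cost trade-off can be sketched with a toy controller rule (my own illustration, not the paper's option-critic machinery): keep the currently loaded expert unless the best candidate beats it by more than a fixed switching cost, and watch the switch rate fall as the cost rises.

```python
import numpy as np

def next_expert(scores, current, cost=0.5):
    """Stay with the loaded expert unless the best candidate's score
    exceeds it by more than `cost` (the deliberation cost)."""
    best = int(np.argmax(scores))
    return best if scores[best] - scores[current] > cost else current

def switch_rate(score_seq, cost=0.5):
    """Fraction of steps at which the chosen expert changes."""
    current = int(np.argmax(score_seq[0]))
    switches = 0
    for scores in score_seq[1:]:
        chosen = next_expert(scores, current, cost)
        switches += chosen != current
        current = chosen
    return switches / (len(score_seq) - 1)
```

A prohibitively large cost pins the expert for the whole sequence (switch rate 0), while cost 0 reduces to greedy per-step argmax; tuning the cost is the capability-versus-churn dial the abstract describes.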

  15. Coevolving Representations in Joint Image-Feature Diffusion

    Joint image-feature generative modeling has recently emerged as an effective strategy for improving diffusion training by coupling low-level VAE latents with high-level semantic features extracted from pre-trained visual encoders. However, existing approaches rely on a fixed representation space, constructed independently of the generative objective and kept unchanged during training. We argue that the representation space guiding diffusion should itself adapt to the generative task. To this end, we propose Coevolving Representation Diffusion (CoReDi), a framework in which the semantic representation space evolves during training by learning a lightweight linear projection jointly with the diffusion model. While naively optimizing this projection leads to degenerate solutions, we show that stable coevolution can be achieved through a combination of stop-gradient targets, normalization, and targeted regularization that prevents feature collapse. This formulation enables the semantic space to progressively specialize to the needs of image synthesis, improving its complementarity with image latents. We apply CoReDi to both VAE latent diffusion and pixel-space diffusion, demonstrating that adaptive semantic representations improve generative modeling across both settings. Experiments show that CoReDi achieves faster convergence and higher sample quality compared to joint diffusion models operating in fixed representation spaces.

Techmeme (15)

  1. Palantir Slack logs and staff interviews reveal internal debates over the company's ICE and DOD contracts during Trump's second term, its manifesto, and more (Makena Kelly/Ars Technica)

    It took just a few months of President Donald Trump's second term for Palantir employees to question their company's commitments to civil liberties.

  2. Collov Labs, whose visual interface lets users feed images and camera input into a model that AI agents can reason over and act on, raised a $23M Series A (Chris Metinko/Axios)

    Collov Labs, which turns images and camera input into real-world actions through AI, raised a $23 million Series A …

  3. A profile of Dwarkesh Patel, whose podcast has become mandatory listening in the AI community, with guests like Jensen Huang, Elon Musk, and Mark Zuckerberg (Benjamin Wallace/New York Times)

    Dwarkesh Patel was a bored college sophomore looking for intellectual stimulation. Now he commands interviews with Jensen Huang …

  4. Sources: Tokyo-based humanoid robotics startup Genki Robotics, co-founded by Andy Rubin, raised a Series A at a ~$1B valuation; it raised a ~$50M seed in 2025 (Lucinda Shen/Axios)

    Genki Robotics, a humanoid robotics company co-founded by Android creator Andy Rubin, is valued at about $1 billion following its recent Series A, Axios Pro has learned.

  5. ASML says it plans to make at least 60 of its standard EUV machines this year, 36% more than it sold in 2025, as it races to meet demand for making AI chips (Kim Mackrael/Wall Street Journal)

    ASML is the only supplier of the machines needed to make cutting edge AI chips at scale. VELDHOVEN, the Netherlands …

  6. Inside Andon Market, billed as the first retail boutique run by an AI agent; the Andon Labs experiment uses a Claude Sonnet 4.6-based agent to run the boutique (Heather Knight/New York Times)

    Andon Market in San Francisco is billed as the first retail boutique run by an artificial intelligence agent.

  7. An interview with Xbox CEO Asha Sharma and EVP Matt Booty on the "Return of Xbox" memo, making Xbox Series X and S the "first-class experience again", and more (Stephen Totilo/Game File)

    In a memo to Microsoft's video game team yesterday, the company's new CEO of Xbox, Asha Sharma …

  8. Epoch AI: Google controls ~25% of global AI compute, with ~3.8M TPUs and 1.3M GPUs; Google Cloud CEO Thomas Kurian says demand and revenue justify the spend (Stephen Morris/Financial Times)

    Thomas Kurian, Google Cloud's CEO, says its AI chips and models can help the data centre business gain ground.

  9. Survey of 1,050 Australian teens: ~60% said they retained access to social media accounts after ban; two-thirds say platforms took no action to remove accounts (Sasha Rogelberg/Fortune)

    If teenagers have a will, they will find a way. … Evelyn, a 14-year-old in New South Wales, told The Washington Post in December 2025 …

  10. A look at growing and increasingly public personal security measures for tech executives as public sentiment turns darkly negative on AI (Eli Rosenberg/The Information)

    Nvidia CEO Jensen Huang cuts a distinctive image, habitually clad in one of his many leather jackets. But it wasn't Huang's outfit that caught …

  11. A look at Strider, an intelligence firm that claims to use agentic AI and public records to let the US Air Force, NATO, and others identify foreign state actors (Jamie Tarabay/Bloomberg)

    Trump's economic crackdown on China's US interests is creating opportunities for a Utah-based intelligence firm.

  12. Trump hosted a gala luncheon for leading $TRUMP holders, where he spoke about his pro-crypto policies, but didn't mention the memecoin's declining value (Wall Street Journal)

    The president's address touched on everything from the war in Iran to his policies supporting the crypto industry.

  13. A look at Tin Can's $100 retro-style, Wi-Fi-enabled landline phone, and how some schools are seeding the device to students in an attempt to curb smartphone use (Samantha Kelly/Bloomberg)

    The Tin Can has gone viral over the past year, mostly through word of mouth. Now schools want to get even more of them in students' homes.

  14. A look at "Stanford inside Stanford", where VCs pursue 18- and 19-year-old students, offering mentorship and funding in a bid to convert promise into profit (Theo Baker/The Atlantic)

    Silicon Valley venture capitalists are wining and dining 18-year-olds. When I was a freshman at Stanford University …

  15. How the Vatican is moving faster than most legacy institutions to shape AI rules and guardrails, with an AI framework, banning use of AI to write homilies, more (Russell Contreras/Axios)

    The Vatican is racing to build digital defenses for the artificial intelligence era, quietly positioning itself as a global referee of what's real.

Solidot(15)

  1. Humpback whale numbers recover, forming large super-groups

    Industrial whaling in the 20th century drove humpback whales to the brink of extinction, leaving fewer than 5% of the pre-whaling population in the oceans. A global whaling moratorium took effect 40 years ago, and humpback populations have since been recovering; although some populations remain endangered, overall numbers are rising. A growing number of places report sightings of humpback "super-groups", meaning 20 or more whales clustered closely together. Humpbacks live in every ocean and undertake one of the most spectacular migrations of any mammal on Earth, traveling up to 8,000 km from warm breeding grounds to cold feeding grounds and transporting vast quantities of nutrients around the globe along the way, a service vital to the health of marine ecosystems. From 2015 to 2020, super-group sightings off South Africa's west coast surged from 10 to 65 per year. On December 29, 2025, two photographers recorded 208 humpbacks off South Africa's west coast in a single day, and 304 the next day, the largest single-day count of large whales ever observed.

  2. Samsung's mobile business may post its first-ever loss

    Due to rising memory costs, intensifying competition, and pressure on products such as foldable phones and smartwatches, Samsung's mobile division may post its first annual loss in 2026. The mobile business has long been a key pillar for Samsung, and a loss there would pose a serious threat to the company's overall results. If the forecast holds, it would be the first annual loss since the mobile business was founded.

  3. Norway plans to ban social media for children under 16

    Norway plans to ban children under 16 from using social media, joining countries such as Australia, France, Austria, Indonesia, and Denmark. The Norwegian government said Friday that the ban was proposed in response to strong public demand, and that it plans to submit a bill to parliament by the end of the year. The restriction would last until January 1 of the year a child turns 16, with tech companies responsible for age verification. Prime Minister Jonas Gahr Store said: "We want children to have a childhood that is truly their own. Play, friendship, and everyday life should not be taken over by algorithms and screens." Norway's digital minister Karianne Tung said in a statement: "Children cannot be made responsible for staying away from platforms they are banned from using. The responsibility must lie with the companies providing the services."

  4. South Korea arrests man who posted AI photo of escaped wolf

    A 2-year-old male wolf jumped out of its enclosure at the O-World zoo in Daejeon, South Korea on April 8, prompting an overnight search by local police and fire departments. The wolf, named Neukgu, was safely captured and returned to the zoo on April 17. This week South Korean police arrested a 40-year-old man accused of posting an AI-generated photo showing the escaped wolf at an intersection, misleading searchers and disrupting the search operation. The AI photo prompted the Daejeon city government to send residents an emergency text warning of a wolf at the intersection. Questioned by police, the man said he did it "just for fun." Authorities are investigating him for obstructing government work by deception, a crime punishable by up to five years in prison or a fine of up to 10 million won.

  5. How tech companies turn evil

    In September and October 2025, Peter Thiel delivered four lectures on the Antichrist at the Commonwealth Club in San Francisco. He argued that the 17th- and 18th-century image of the Antichrist would have been a mad scientist in the mold of Dr. Strangelove, while the 21st-century Antichrist is a Luddite, someone like Greta Thunberg, the Swedish climate activist. Not even class warfare has been this deranged. America's plutocrats have rarely dressed up economic self-interest as a religious mission, but today's tech elite portray the industry's future prosperity as a battle against the minions of Satan. Columbia law professor Tim Wu observes that for the tech elite, AGI is the equivalent of the Second Coming, and to bring about the Singularity they are willing to stop their opponents at any cost. Whether the God in question is literal or metaphorical, the core issue is still money. Tech giants fiercely oppose government intervention, and their AI investments are unprecedented in scale: $670 billion in 2026 alone, 2.1% of US GDP. These interests have pushed Silicon Valley into the Republican camp. In 2020, 98% of tech-industry donations went to Democrats; by late 2025, nearly three-quarters of the industry's political spending went to Republicans, with Musk donating $351 million to Republican campaigns. Tech lobbying, less than half of Big Pharma's a decade ago, has rapidly climbed to three-quarters of it. A generation ago the industry promised to devolve power and information to ordinary people; today it is obsessed with surveillance, disinformation, monopoly, and destruction. Tech first empowered humanity, then came to empower predators of every kind, until the platforms themselves became corporate predators. When did tech companies become antisocial? The shift was gradual, coming as their CEOs accumulated ever more power and money. Peter Thiel was never a libertarian: as early as 2009 he declared that he no longer believed freedom and democracy were compatible, and in 2014 he wrote that monopoly is the condition of every successful business. Tim Wu sees the 2010s as the turning point. Before then, companies like Amazon and Google operated almost like charities: Amazon was essentially unprofitable for its first decade, and Google's motto was "Don't be evil." In the 2010s they began putting shareholder interests first, and Google abandoned "Don't be evil." The companies had recognized the power they held: Amazon decides how we shop, Google decides how we access knowledge, Facebook decides how we communicate. The tech giants rival the robber barons of the Gilded Age in greed, and humanity faces a long, hard fight, because each of them is as rich as a nation.

  6. The earliest giant octopuses were apex ocean predators

    According to a study published in Science, researchers recovered 12 jaws of finned octopuses from Late Cretaceous sediments (100 to 72 million years ago) and identified two main species, Nanaimoteuthis jeletzkyi and N. haggarti. These finned octopuses, N. haggarti in particular, were exceptionally large: 7 to 19 meters in length, enough to rival the giant marine reptiles of their time, and possibly the largest invertebrates known. The jaws of the largest individuals show extensive wear. The wear patterns indicate they were active carnivores that routinely crunched through the hard shells and bones of their prey, using their long, agile arms to seize sizable prey while tearing it apart with their powerful beaks, behavior thought to be associated with high intelligence. These giant octopuses were active participants in marine ecosystems, filling roles previously reserved for large vertebrates.

  7. Firefox quietly integrates Brave's adblock engine

    Firefox 149, released by Mozilla last month, quietly integrated Brave's open-source adblock engine adblock-rust. It is disabled by default and ships with no UI or content-filtering lists. adblock-rust is the engine behind Brave's built-in ad blocker; written in Rust and licensed under MPL-2.0, it handles network-request blocking and cosmetic filtering (suppressing specific page elements), and is compatible with uBlock Origin's filter-list syntax. The Firefox fork Waterfox has also adopted adblock-rust.

  8. Reducing plastic exposure sharply cuts plastic chemicals in the body within weeks

    According to a new study, reducing exposure to plastics can cut the body's levels of plastic-derived chemicals by up to half in the short term. The study focused on two classes of chemicals that leach from plastic products: phthalates and bisphenols. Urine tests of the 211 trial participants showed elevated levels of plastic chemicals in all of them. Sixty participants then took part in a randomized controlled trial in which researchers worked with farmers and producers to supply food made and packaged entirely without plastic. After one week, compared with the control group, participants who reduced their plastic exposure had 44% less phthalates and over 50% less bisphenols in their urine.

  9. Health data of 500,000 UK Biobank participants leaked

    Health data of 500,000 UK Biobank participants has leaked, with the dataset offered for sale on Alibaba. UK technology secretary Ian Murray said the organization running UK Biobank had notified the government of the incident; the leaked information does not include names, addresses, contact details, or phone numbers, but may include sex, age, month and year of birth, socioeconomic status, lifestyle habits, and measurements from biological samples. UK Biobank collects health data from volunteers to help improve the detection and treatment of dementia, cancer, and Parkinson's disease, and more than 18,000 papers have used its data. Alibaba has removed the dataset, but it was re-uploaded to GitHub, and UK Biobank has sent GitHub a large number of DMCA takedown notices.

  10. Astronomers find the edge of the Milky Way

    The Milky Way's stellar disk formed from the inside out, which means stars farther from the center should be younger. Analyzing more than 100,000 giant stars in the Galaxy, astronomers found that this inside-out age gradient reverses at 35,000 to 40,000 light-years from the Galactic center: beyond that distance, stars grow older again. The flip produces a U-shaped age distribution whose minimum coincides with a sharp drop in the star-formation rate, marking the boundary of the Milky Way's star-forming disk. The astronomers believe they have identified the edge of the Galactic disk.

  11. NASA's Roman space telescope to launch as early as September

    NASA announced that the Nancy Grace Roman Space Telescope will launch as early as this September and no later than May 2027. Named after the first woman to head NASA's astronomy division, the infrared telescope's core missions include probing dark energy, discovering exoplanets, and testing general relativity's predictions about the curvature of spacetime. It uses a 2.4-meter primary mirror donated by the National Reconnaissance Office (the NRO donated two such mirrors) and carries two science instruments: the 300-megapixel multiband infrared Wide Field Instrument (WFI) and a coronagraph instrument (CGI) capable of directly observing Jupiter-like planets around nearby stars.

  12. DeepSeek releases DeepSeek-V4 preview

    DeepSeek has released a preview of DeepSeek-V4. It comes in two versions: the Pro version has 1.6 trillion parameters, 49 billion of them active, while the Flash version has 284 billion parameters, 13 billion of them active. Both support a one-million-token context window. Besides Nvidia GPUs, DeepSeek-V4 also supports Huawei Ascend NPUs. According to DeepSeek, V4-Pro achieves the best results among open-source models on agentic coding benchmarks and performs equally strongly on other agent-related evaluations; on world-knowledge benchmarks, Pro leads other open-source models by a wide margin, trailing only the top closed-source model Gemini-Pro-3.1; and on math, STEM, and competitive-programming benchmarks, V4-Pro surpasses all open-source models with published results.

  13. Tim Cook calls the 2012 Apple Maps launch his first major mistake as CEO

    At a recent all-hands meeting, outgoing Apple CEO Tim Cook called the 2012 launch of Apple Maps the first real major mistake of his tenure. Apple Maps was primitive at the time and not ready to ship; its failure forced Cook to apologize to users and recommend competitors' mapping apps. Cook said the failure was nonetheless valuable: "It showed that we always put the user at the center of our decisions. Today we have the best maps app in the world. We learned to persist; we made mistakes but did the right thing."

  14. The beating heart suppresses cardiac cancer growth

    According to a study published in Science, the heart's constant beating may actively suppress tumor growth in cardiac tissue: cellular pathways in the tissue alter how cancer cells regulate their genes, blocking their proliferation. The findings reveal a role for mechanical forces in protecting the heart from cancer and could pave the way for new cancer therapies based on mechanical stimulation. Cardiac cancer is extremely rare in mammals. Moreover, the adult heart has limited capacity for self-renewal, with cardiomyocytes regenerating at a rate of only about 1% per year. One proposed explanation for both traits is the enormous mechanical load on heart tissue, which must pump blood continuously against great resistance; this constant stress appears to suppress the proliferative capacity of heart cells. Using genetically engineered mouse models, the researchers found the heart remarkably resistant to cancer even when strong oncogenic mutations were introduced. Mechanical forces within the tissue remodel the regulatory landscape of the cancer-cell genome, determining whether the cells can proliferate. Central to this process is Nesprin-2, a protein that relays mechanical signals from the cell surface to the nucleus. As a component of the LINC complex, Nesprin-2 senses the heart's mechanical microenvironment and can functionally alter chromatin structure and histone methylation, reducing the activity of genes linked to tumor-cell proliferation. When Nesprin-2 was silenced in cancer cells, the cells regained the ability to grow and form tumors in the mechanically active heart.

  15. Ubuntu 26.04 LTS released

    Canonical has released Ubuntu 26.04 LTS, codenamed Resolute Raccoon, alongside the flavors Edubuntu, Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Cinnamon, Ubuntu Kylin, Ubuntu Studio, Ubuntu Unity, and Xubuntu. Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, Ubuntu WSL, and Ubuntu Core receive five years of support, the remaining flavors three years, and paid Expanded Security Maintenance (ESM) extends coverage to ten years. Ubuntu 26.04 ships the latest Linux 7.0 kernel and the GNOME 50 desktop, and introduces TPM-based full-disk encryption, GStreamer 1.28, sandboxed image loading, Chrony 4.8, and more.