OrangeBot.AI Digest — 2026-03-28

84 headlines across 8 sources, aggregated for this day.

Hacker News(15)

  1. Further human + AI + proof assistant work on Knuth's "Claude Cycles" problem (twitter.com)
  2. Founder of GitLab battles cancer by founding companies (sytse.com)
  3. Linux is an interpreter (astrid.tech)
  4. I decompiled the White House's new app (thereallo.dev)
  5. Folk are getting dangerously attached to AI that always tells them they're right (www.theregister.com)
  6. AI overly affirms users asking for personal advice (news.stanford.edu)
  7. Improved Git Diffs with Delta, Fzf and a Little Shell Scripting (nickjanetakis.com)
  8. Britain today generating 90%+ of electricity from renewables (grid.iamkate.com)
  9. Spanish legislation as a Git repo (github.com)
  10. I Built an Open-World Engine for the N64 [video] (www.youtube.com)
  11. Paper Tape Is All You Need – Training a Transformer on a 1976 Minicomputer (github.com)
  12. Cocoa-Way – Native macOS Wayland compositor for running Linux apps seamlessly (github.com)
  13. Treason in the Futures Markets (paulkrugman.substack.com)
  14. CERN uses ultra-compact AI models on FPGAs for real-time LHC data filtering (theopenreader.org)
  15. The bee that everyone wants to save (naturalist.bearblog.dev)

GitHub Trending(9)

  1. hacksider / Deep-Live-Cam
  2. obra / superpowers
  3. SakanaAI / AI-Scientist-v2
  4. virattt / dexter
  5. twentyhq / twenty
  6. onyx-dot-app / onyx
  7. datalab-to / chandra
  8. agentscope-ai / agentscope
  9. apache / superset

Product Hunt(15)

  1. WordPress Studio CLI

    WordPress Studio now has an independently installable CLI

  2. Bulk Exporter for Sora

    1-click backup for your Sora videos, images & prompts.

  3. Lexaclaw

    Startup legal compliance built on OpenClaw

  4. Cohere Transcribe

    New state-of-the-art in open source speech recognition

  5. Crossnode

    Vibe code AI agents and put them behind a payment wall

  6. Glance

    Real browser for Claude Code: Test, Screenshot, Automate

  7. RepoLens

    Know what changed and what matters across your codebase

  8. BNA

    AI agent that builds full-stack iOS & Android apps with auth

  9. Aera Browser

    The browser built for automation

  10. Domscribe

    Give your AI coding agent eyes on your running frontend

  11. CrabTalk

    The agent daemon that hides nothing. 8MB. Open Source

  12. Spokk

    Feedback, reviews, loyalty & referrals for SMB

  13. SlapMac

    Slap your MacBook. It screams back. That's it.

  14. DwellRecord

    Keep your home records all together

  15. Expect

    Let agents test your code in a real browser

Hugging Face(15)

  1. PixelSmile: Toward Fine-Grained Facial Expression Editing

    Fine-grained facial expression editing has long been limited by intrinsic semantic overlap. To address this, we construct the Flex Facial Expression (FFE) dataset with continuous affective annotations and establish FFE-Bench to evaluate structural confusion, editing accuracy, linear controllability, and the trade-off between expression editing and identity preservation. We propose PixelSmile, a diffusion framework that disentangles expression semantics via fully symmetric joint training. PixelSmile combines intensity supervision with contrastive learning to produce stronger and more distinguishable expressions, achieving precise and stable linear expression control through textual latent interpolation. Extensive experiments demonstrate that PixelSmile achieves superior disentanglement and robust identity preservation, confirming its effectiveness for continuous, controllable, and fine-grained expression editing, while naturally supporting smooth expression blending.
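
    The "linear expression control through textual latent interpolation" described above can be pictured as a simple lerp between embeddings. This is an illustrative sketch only; the function name, dimensions, and vectors below are invented and do not come from the paper.

```python
import numpy as np

def interpolate_expression(e_neutral, e_target, alpha):
    """Linear expression control via latent interpolation:
    alpha=0 yields the neutral embedding, alpha=1 the full target
    expression, and intermediate values a proportional blend."""
    return (1.0 - alpha) * e_neutral + alpha * e_target

# Toy 4-d embeddings standing in for real text latents.
e_neutral = np.zeros(4)
e_smile = np.ones(4)
half = interpolate_expression(e_neutral, e_smile, 0.5)
```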

  2. Intern-S1-Pro: Scientific Multimodal Foundation Model at Trillion Scale

    We introduce Intern-S1-Pro, the first one-trillion-parameter scientific multimodal foundation model. Scaling to this unprecedented size, the model delivers a comprehensive enhancement across both general and scientific domains. Beyond stronger reasoning and image-text understanding capabilities, its intelligence is augmented with advanced agent capabilities. Simultaneously, its scientific expertise has been vastly expanded to master over 100 specialized tasks across critical science fields, including chemistry, materials, life sciences, and earth sciences. Achieving this massive scale is made possible by the robust infrastructure support of XTuner and LMDeploy, which facilitates highly efficient Reinforcement Learning (RL) training at the 1-trillion parameter level while ensuring strict precision consistency between training and inference. By seamlessly integrating these advancements, Intern-S1-Pro further fortifies the fusion of general and specialized intelligence, working as a Specializable Generalist, demonstrating its position in the top tier of open-source models for general capabilities, while outperforming proprietary models in the depth of specialized scientific tasks.

  3. Calibri: Enhancing Diffusion Transformers via Parameter-Efficient Calibration

    In this paper, we uncover the hidden potential of Diffusion Transformers (DiTs) to significantly enhance generative tasks. Through an in-depth analysis of the denoising process, we demonstrate that introducing a single learned scaling parameter can significantly improve the performance of DiT blocks. Building on this insight, we propose Calibri, a parameter-efficient approach that optimally calibrates DiT components to elevate generative quality. Calibri frames DiT calibration as a black-box reward optimization problem, which is efficiently solved using an evolutionary algorithm and modifies just ~100 parameters. Experimental results reveal that despite its lightweight design, Calibri consistently improves performance across various text-to-image models. Notably, Calibri also reduces the inference steps required for image generation, all while maintaining high-quality outputs.
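
    The "black-box reward optimization ... solved using an evolutionary algorithm" over ~100 scaling parameters can be sketched with a minimal (1+λ) evolution strategy. Everything below (the reward function, hyperparameters, and function names) is an invented toy stand-in, not the paper's actual method or reward.

```python
import random

def evolve_scales(reward, n_params=100, generations=200, pop=8, sigma=0.05, seed=0):
    """(1+lambda) evolution strategy: Gaussian-mutate the per-block
    scaling parameters and keep the best-scoring candidate."""
    rng = random.Random(seed)
    best = [1.0] * n_params          # start from identity scaling
    best_r = reward(best)
    for _ in range(generations):
        for _ in range(pop):
            cand = [s + rng.gauss(0.0, sigma) for s in best]
            r = reward(cand)
            if r > best_r:
                best, best_r = cand, r
    return best, best_r

# Toy stand-in reward: peaks when every scale sits at 1.1.
toy_reward = lambda s: -sum((x - 1.1) ** 2 for x in s)

scales, score = evolve_scales(toy_reward)
```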

  4. RealRestorer: Towards Generalizable Real-World Image Restoration with Large-Scale Image Editing Models

    Image restoration under real-world degradations is critical for downstream tasks such as autonomous driving and object detection. However, existing restoration models are often limited by the scale and distribution of their training data, resulting in poor generalization to real-world scenarios. Recently, large-scale image editing models have shown strong generalization ability in restoration tasks, especially for closed-source models like Nano Banana Pro, which can restore images while preserving consistency. Nevertheless, achieving such performance with those large universal models requires substantial data and computational costs. To address this issue, we construct a large-scale dataset covering nine common real-world degradation types and train a state-of-the-art open-source model to narrow the gap with closed-source alternatives. Furthermore, we introduce RealIR-Bench, which contains 464 real-world degraded images and tailored evaluation metrics focusing on degradation removal and consistency preservation. Extensive experiments demonstrate our model ranks first among open-source methods, achieving state-of-the-art performance.

  5. Voxtral TTS

    We introduce Voxtral TTS, an expressive multilingual text-to-speech model that generates natural speech from as little as 3 seconds of reference audio. Voxtral TTS adopts a hybrid architecture that combines auto-regressive generation of semantic speech tokens with flow-matching for acoustic tokens. These tokens are encoded and decoded with Voxtral Codec, a speech tokenizer trained from scratch with a hybrid VQ-FSQ quantization scheme. In human evaluations conducted by native speakers, Voxtral TTS is preferred for multilingual voice cloning due to its naturalness and expressivity, achieving a 68.4% win rate over ElevenLabs Flash v2.5. We release the model weights under a CC BY-NC license.

  6. MACRO: Advancing Multi-Reference Image Generation with Structured Long-Context Data

    Generating images conditioned on multiple visual references is critical for real-world applications such as multi-subject composition, narrative illustration, and novel view synthesis, yet current models suffer from severe performance degradation as the number of input references grows. We identify the root cause as a fundamental data bottleneck: existing datasets are dominated by single- or few-reference pairs and lack the structured, long-context supervision needed to learn dense inter-reference dependencies. To address this, we introduce MacroData, a large-scale dataset of 400K samples, each containing up to 10 reference images, systematically organized across four complementary dimensions -- Customization, Illustration, Spatial reasoning, and Temporal dynamics -- to provide comprehensive coverage of the multi-reference generation space. Recognizing the concurrent absence of standardized evaluation protocols, we further propose MacroBench, a benchmark of 4,000 samples that assesses generative coherence across graded task dimensions and input scales. Extensive experiments show that fine-tuning on MacroData yields substantial improvements in multi-reference generation, and ablation studies further reveal synergistic benefits of cross-task co-training and effective strategies for handling long-context complexity. The dataset and benchmark will be publicly released.

  7. MSA: Memory Sparse Attention for Efficient End-to-End Memory Model Scaling to 100M Tokens

    Long-term memory is a cornerstone of human intelligence. Enabling AI to process lifetime-scale information remains a long-standing pursuit in the field. Due to the constraints of full-attention architectures, the effective context length of large language models (LLMs) is typically limited to 1M tokens. Existing approaches, such as hybrid linear attention, fixed-size memory states (e.g., RNNs), and external storage methods like RAG or agent systems, attempt to extend this limit. However, they often suffer from severe precision degradation and rapidly increasing latency as context length grows, an inability to dynamically modify memory content, or a lack of end-to-end optimization. These bottlenecks impede complex scenarios like large-corpus summarization, Digital Twins, and long-history agent reasoning, while limiting memory capacity and slowing inference. We present Memory Sparse Attention (MSA), an end-to-end trainable, efficient, and massively scalable memory model framework. Through core innovations including scalable sparse attention and document-wise RoPE, MSA achieves linear complexity in both training and inference while maintaining exceptional stability, exhibiting less than 9% degradation when scaling from 16K to 100M tokens. Furthermore, KV cache compression, combined with Memory Parallel, enables 100M-token inference on 2xA800 GPUs. We also propose Memory Interleaving to facilitate complex multi-hop reasoning across scattered memory segments. MSA significantly surpasses frontier LLMs, state-of-the-art RAG systems, and leading memory agents in long-context benchmarks. These results demonstrate that by decoupling memory capacity from reasoning, MSA provides a scalable foundation to endow general-purpose models with intrinsic, lifetime-scale memory.
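
    The core idea of attending sparsely over a huge context, rather than densely over everything, can be illustrated with a toy block-sparse attention step: rank key blocks by a cheap summary score and run full attention only within the selected blocks. The block size, scoring rule, and shapes below are illustrative assumptions, not MSA's actual design.

```python
import numpy as np

def block_sparse_attention(q, K, V, block=4, top_k=2):
    """One query attends only to the top_k key blocks whose mean key
    vector scores highest against the query, then applies standard
    softmax attention restricted to those blocks."""
    n, d = K.shape
    n_blocks = n // block
    Kb = K[: n_blocks * block].reshape(n_blocks, block, d)
    Vb = V[: n_blocks * block].reshape(n_blocks, block, d)
    # Cheap block summary: similarity of the query to each block's mean key.
    scores = Kb.mean(axis=1) @ q
    chosen = np.argsort(scores)[-top_k:]
    Ks = Kb[chosen].reshape(-1, d)
    Vs = Vb[chosen].reshape(-1, d)
    logits = Ks @ q / np.sqrt(d)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ Vs

rng = np.random.default_rng(0)
K = rng.normal(size=(16, 8))
V = rng.normal(size=(16, 8))
q = K[5]  # query matching one stored key
out = block_sparse_attention(q, K, V)
```

    Cost scales with the number of selected blocks, not the full context, which is the property that lets such schemes reach very long contexts.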

  8. SlopCodeBench: Benchmarking How Coding Agents Degrade Over Long-Horizon Iterative Tasks

    Software development is iterative, yet agentic coding benchmarks overwhelmingly evaluate single-shot solutions against complete specifications. Code can pass the test suite but become progressively harder to extend. Recent iterative benchmarks attempt to close this gap, but constrain the agent's design decisions too tightly to faithfully measure how code quality shapes future extensions. We introduce SlopCodeBench, a language-agnostic benchmark comprising 20 problems and 93 checkpoints, in which agents repeatedly extend their own prior solutions under evolving specifications that force architectural decisions without prescribing internal structure. We track two trajectory-level quality signals: verbosity, the fraction of redundant or duplicated code, and structural erosion, the share of complexity mass concentrated in high-complexity functions. No agent solves any problem end-to-end across 11 models; the highest checkpoint solve rate is 17.2%. Quality degrades steadily: erosion rises in 80% of trajectories and verbosity in 89.8%. Against 48 open-source Python repositories, agent code is 2.2x more verbose and markedly more eroded. Tracking 20 of those repositories over time shows that human code stays flat, while agent code deteriorates with each iteration. A prompt-intervention study shows that initial quality can be improved, but it does not halt degradation. These results demonstrate that pass-rate benchmarks systematically undermeasure extension robustness, and that current agents lack the design discipline iterative software development demands.

  9. AVControl: Efficient Framework for Training Audio-Visual Controls

    Controlling video and audio generation requires diverse modalities, from depth and pose to camera trajectories and audio transformations, yet existing approaches either train a single monolithic model for a fixed set of controls or introduce costly architectural changes for each new modality. We introduce AVControl, a lightweight, extendable framework built on LTX-2, a joint audio-visual foundation model, where each control modality is trained as a separate LoRA on a parallel canvas that provides the reference signal as additional tokens in the attention layers, requiring no architectural changes beyond the LoRA adapters themselves. We show that simply extending image-based in-context methods to video fails for structural control, and that our parallel canvas approach resolves this. On the VACE Benchmark, we outperform all evaluated baselines on depth- and pose-guided generation, inpainting, and outpainting, and show competitive results on camera control and audio-visual benchmarks. Our framework supports a diverse set of independently trained modalities: spatially-aligned controls such as depth, pose, and edges, camera trajectory with intrinsics, sparse motion control, video editing, and, to our knowledge, the first modular audio-visual controls for a joint generation model. Our method is both compute- and data-efficient: each modality requires only a small dataset and converges within a few hundred to a few thousand training steps, a fraction of the budget of monolithic alternatives. We publicly release our code and trained LoRA checkpoints.

  10. VFIG: Vectorizing Complex Figures in SVG with Vision-Language Models

    Scalable Vector Graphics (SVG) are an essential format for technical illustration and digital design, offering precise resolution independence and flexible semantic editability. In practice, however, original vector source files are frequently lost or inaccessible, leaving only "flat" rasterized versions (e.g., PNG or JPEG) that are difficult to modify or scale. Manually reconstructing these figures is a prohibitively labor-intensive process, requiring specialized expertise to recover the original geometric intent. To bridge this gap, we propose VFIG, a family of Vision-Language Models trained for complex and high-fidelity figure-to-SVG conversion. While this task is inherently data-driven, existing datasets are typically small-scale and lack the complexity of professional diagrams. We address this by introducing VFIG-DATA, a large-scale dataset of 66K high-quality figure-SVG pairs, curated from a diverse mix of real-world paper figures and procedurally generated diagrams. Recognizing that SVGs are composed of recurring primitives and hierarchical local structures, we introduce a coarse-to-fine training curriculum that begins with supervised fine-tuning (SFT) to learn atomic primitives and transitions to reinforcement learning (RL) refinement to optimize global diagram fidelity, layout consistency, and topological edge cases. Finally, we introduce VFIG-BENCH, a comprehensive evaluation suite with novel metrics designed to measure the structural integrity of complex figures. VFIG achieves state-of-the-art performance among open-source models and performs on par with GPT-5.2, achieving a VLM-Judge score of 0.829 on VFIG-BENCH.

  11. MuRF: Unlocking the Multi-Scale Potential of Vision Foundation Models

    Vision Foundation Models (VFMs) have become the cornerstone of modern computer vision, offering robust representations across a wide array of tasks. While recent advances allow these models to handle varying input sizes during training, inference typically remains restricted to a single, fixed scale. This prevalent single-scale paradigm overlooks a fundamental property of visual perception: varying resolutions offer complementary inductive biases, where low-resolution views excel at global semantic recognition and high-resolution views are essential for fine-grained refinement. In this work, we propose Multi-Resolution Fusion (MuRF), a simple yet universally effective strategy to harness this synergy at inference time. Instead of relying on a single view, MuRF constructs a unified representation by processing an image at multiple resolutions through a frozen VFM and fusing the resulting features. The universality of MuRF is its most compelling attribute. It is not tied to a specific architecture, serving instead as a fundamental, training-free enhancement to visual representation. We empirically validate this by applying MuRF to a broad spectrum of critical computer vision tasks across multiple distinct VFM families - primarily DINOv2, but also demonstrating successful generalization to contrastive models like SigLIP2.
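
    The multi-resolution fusion idea (encode at several resolutions with a frozen model, then fuse the features) can be sketched as below. The "encoder" here is a mean-pooling stub standing in for a frozen VFM, and the two-scale averaging scheme is an illustrative assumption, not the paper's exact fusion rule.

```python
import numpy as np

def encode(img, patch=4):
    """Stand-in for a frozen VFM: mean-pool non-overlapping patches
    into a grid of 'token' features (illustrative only)."""
    h, w = img.shape
    g = img[: h // patch * patch, : w // patch * patch]
    return g.reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))

def downsample(img, factor=2):
    """Average-pool the image to a lower resolution."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def murf_fuse(img):
    """Encode the image at full and half resolution, upsample the
    coarse feature grid to match, and average the two views."""
    hi = encode(img)                      # fine-grained view
    lo = encode(downsample(img))          # global, low-resolution view
    lo_up = np.kron(lo, np.ones((2, 2)))  # nearest-neighbour upsample
    return (hi + lo_up) / 2

img = np.arange(64, dtype=float).reshape(8, 8)
feat = murf_fuse(img)
```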

  12. Less Gaussians, Texture More: 4K Feed-Forward Textured Splatting

    Existing feed-forward 3D Gaussian Splatting methods predict pixel-aligned primitives, leading to a quadratic growth in primitive count as resolution increases. This fundamentally limits their scalability, making high-resolution synthesis such as 4K intractable. We introduce LGTM (Less Gaussians, Texture More), a feed-forward framework that overcomes this resolution scaling barrier. By predicting compact Gaussian primitives coupled with per-primitive textures, LGTM decouples geometric complexity from rendering resolution. This approach enables high-fidelity 4K novel view synthesis without per-scene optimization, a capability previously out of reach for feed-forward methods, all while using significantly fewer Gaussian primitives. Project page: https://yxlao.github.io/lgtm/

  13. MemMA: Coordinating the Memory Cycle through Multi-Agent Reasoning and In-Situ Self-Evolution

    Memory-augmented LLM agents maintain external memory banks to support long-horizon interaction, yet most existing systems treat construction, retrieval, and utilization as isolated subroutines. This creates two coupled challenges: strategic blindness on the forward path of the memory cycle, where construction and retrieval are driven by local heuristics rather than explicit strategic reasoning, and sparse, delayed supervision on the backward path, where downstream failures rarely translate into direct repairs of the memory bank. To address these challenges, we propose MemMA, a plug-and-play multi-agent framework that coordinates the memory cycle along both the forward and backward paths. On the forward path, a Meta-Thinker produces structured guidance that steers a Memory Manager during construction and directs a Query Reasoner during iterative retrieval. On the backward path, MemMA introduces in-situ self-evolving memory construction, which synthesizes probe QA pairs, verifies the current memory, and converts failures into repair actions before the memory is finalized. Extensive experiments on LoCoMo show that MemMA consistently outperforms existing baselines across multiple LLM backbones and improves three different storage backends in a plug-and-play manner. Our code is publicly available at https://github.com/ventr1c/memma.

  14. Representation Alignment for Just Image Transformers is not Easier than You Think

    Representation Alignment (REPA) has emerged as a simple way to accelerate Diffusion Transformers training in latent space. At the same time, pixel-space diffusion transformers such as Just image Transformers (JiT) have attracted growing attention because they remove a dependency on a pretrained tokenizer, and then avoid the reconstruction bottleneck of latent diffusion. This paper shows that the REPA can fail for JiT. REPA yields worse FID for JiT as training proceeds and collapses diversity on image subsets that are tightly clustered in the representation space of pretrained semantic encoder on ImageNet. We trace the failure to an information asymmetry: denoising occurs in the high dimensional image space, while the semantic target is strongly compressed, making direct regression a shortcut objective. We propose PixelREPA, which transforms the alignment target and constrains alignment with a Masked Transformer Adapter that combines a shallow transformer adapter with partial token masking. PixelREPA improves both training convergence and final quality. PixelREPA reduces FID from 3.66 to 3.17 for JiT-B/16 and improves Inception Score (IS) from 275.1 to 284.6 on ImageNet 256×256, while achieving over 2× faster convergence. Finally, PixelREPA-H/16 achieves FID=1.81 and IS=317.2. Our code is available at https://github.com/kaist-cvml/PixelREPA.

  15. AVO: Agentic Variation Operators for Autonomous Evolutionary Search

    Agentic Variation Operators (AVO) are a new family of evolutionary variation operators that replace the fixed mutation, crossover, and hand-designed heuristics of classical evolutionary search with autonomous coding agents. Rather than confining a language model to candidate generation within a prescribed pipeline, AVO instantiates variation as a self-directed agent loop that can consult the current lineage, a domain-specific knowledge base, and execution feedback to propose, repair, critique, and verify implementation edits. We evaluate AVO on attention, among the most aggressively optimized kernel targets in AI, on NVIDIA Blackwell (B200) GPUs. Over 7 days of continuous autonomous evolution on multi-head attention, AVO discovers kernels that outperform cuDNN by up to 3.5% and FlashAttention-4 by up to 10.5% across the evaluated configurations. The discovered optimizations transfer readily to grouped-query attention, requiring only 30 minutes of additional autonomous adaptation and yielding gains of up to 7.0% over cuDNN and 9.3% over FlashAttention-4. Together, these results show that agentic variation operators move beyond prior LLM-in-the-loop evolutionary pipelines by elevating the agent from candidate generator to variation operator, and can discover performance-critical micro-architectural optimizations that produce kernels surpassing state-of-the-art expert-engineered attention implementations on today's most advanced GPU hardware.

Techmeme(15)

  1. Report analyzing payments of 28M US consumers shows Claude adding paid subs at a steadily increasing pace; Anthropic: paid subs have more than doubled this year (Julie Bort/TechCrunch)

    Whatever the final outcome for Anthropic from its feud with the Department of Defense, the attention it has generated …

  2. ShinyHunters says it stole 350GB+ of data in a cyberattack on the European Commission, detected on March 24; the EC says its internal systems were not affected (Pierluigi Paganini/Security Affairs)

    The European Commission has allegedly been breached by ShinyHunters, with reported data dumps including content from mail servers.

  3. Sources: DHS clears seven CISA staffers of wrongdoing; the staffers had been accused of misleading CISA's former acting director into taking a polygraph test (John Sakellariadis/Politico)

    Former DHS spokesperson Tricia McLaughlin previously told POLITICO the staffers were under investigation for “misleading” …

  4. Sources: Ross Nordeen, the last remaining cofounder at xAI, left the company on Friday; Nordeen reported directly to Elon Musk as his right-hand operator (Grace Kay/Business Insider)

    Elon Musk started xAI with 11 cofounders in 2023. The last remaining one, Ross Nordeen, has now exited the company.

  5. A look at the decadelong feud between Sam Altman and Dario Amodei; sources say Amodei likened Altman's legal fight with Musk to Hitler's fight with Stalin (Keach Hagey/Wall Street Journal)

    Personal wounds and power struggles between the leaders of OpenAI and Anthropic are defining how the world encounters the technology.

  6. Chess grandmasters find new ways to win by making less optimal moves after AI pushed classical chess toward perfect play, breathing new life into the game (Kevin Lincoln/Bloomberg)

    Artificial intelligence drove chess toward perfect play, leading to more draws at top tournaments. Now grandmasters are winning by making less optimal moves.

  7. A look at some themes at this year's Hill and Valley Forum: embracing government-led industrial policy to onshore manufacturing, AI's unpopularity, and more (Newcomer)

    Plus, Kleiner's big fundraise & a reckoning for social media … Industrial Policy Gets New Life as Tech Eyes Government Incentives

  8. Prediction market bets decided on linguistic technicalities expose how hard it is to turn language into a binary market, with payouts hinging on a single word (Christopher Beam/Bloomberg)

    Bets decided on linguistic technicalities are exposing how hard it is to turn language into a binary market with payouts hinging on a single word.

  9. Meta's longtime content policy chief Monika Bickert is leaving the company to teach at Harvard; she will stay at Meta until August to work on a transition plan (Reuters)

    Meta's long-time content policy chief Monika Bickert, who oversaw the writing and enforcement of Facebook's content policies …

  10. Investment in Austin-based startups grew to a record high of $7.19B in 2025, up from $4.37B in 2024 and topping a pandemic peak of $6.1B in 2021 (Mary Ann Azevedo/Crunchbase News)

    At the height of the pandemic and the global shift to remote work, tech founders and investors alike flocked to Austin, Texas …

  11. Taiwanese memory chipmaker Nanya raised $2.5B in a private placement from Sandisk, SK Hynix's Solidigm, Cisco, and Kioxia to expand advanced chip production (Reuters)

    Shares of Taiwanese memory chip maker Nanya Technology (2408.TW) opened limit-up 10% on Thursday after raising about $2.5 billion …

  12. Indonesia begins implementing a regulation that bans under-16s from digital platforms that could expose them to porn, cyberbullying, online scams, and addiction (Edna Tarigan/Associated Press)

    Indonesia on Saturday began implementing a new government regulation approved earlier this month that bans children younger …

  13. Worth, which aims to help financial services onboard and underwrite SMBs, raised a $30M Series A led by Fulcrum Equity Partners, following a $25M seed round (Brian Contreras/Inc)

    Being a first-time founder can be tough. When you come back for round two, though, there are definitely a few perks.

  14. Despite Anthropic winning a ruling against the DOD in California, it must still convince the DC Circuit Court of Appeals to lift the supply chain risk label (Brendan Bordelon/Politico)

    But while Thursday's decision is a win for Anthropic, several lawyers and lobbyists said it will do little to lift the cloud …

  15. A Jeffrey Epstein victim files a class action against the Trump administration and Google, claiming Google's search and AI Mode published victims' personal info (Jennifer Elias/CNBC)

    A victim of notorious sex predator Jeffrey Epstein filed a class action lawsuit Thursday on behalf of herself and other survivors …

Solidot(15)

  1. Austria plans to ban children under 14 from social media

    Following Australia, Denmark, Malaysia, and Norway, Austria also plans to strictly limit social media use by those under 14, citing addictive algorithms and content harmful to children. The government aims to complete a draft bill by the end of June; enforcement and age-verification details have not yet been finalized. Social Democrat Vice-Chancellor Andreas Babler said the government cannot stand idly by while social media platforms addict and harm children, and that social media should be treated like alcohol or tobacco.

  2. Sperm in space tumble like astronauts adrift

    According to a study published in Communications Biology, sperm in space tumble like astronauts adrift and cannot find their way to the egg. Researchers used a 3D rotator to simulate microgravity and placed human, mouse, and pig sperm samples in a maze mimicking the female reproductive tract; for ethical reasons, no actual egg was placed in the maze. Compared with controls, about 40% fewer human sperm exposed to simulated microgravity successfully navigated the maze. Adding progesterone helped overcome the sperm's disorientation; the researchers attribute this to the fact that eggs also release progesterone, which helps guide sperm.

  3. Windows PCs crash three times as often as Apple Macs

    According to Omnissa's report, The 2026 State of Digital Workspace, based on 2025 telemetry from retail, healthcare, finance, education, government, and other customers worldwide, Windows PCs crash far more often than Macs. The report found that Windows devices were forced to shut down 3.1 times as often as Macs, while Windows applications became unresponsive 7.5 times as often as macOS apps and required restarts three times as often. In healthcare and pharmaceuticals, more than half of Windows and Android devices trail the latest operating system by five major versions, likely leaving them more vulnerable to malware and more bug-prone. In education, more than half of desktops and mobile devices are unencrypted, putting student privacy at greater risk of exposure. Macs also last longer, being replaced on average every five years versus three for Windows PCs. Apple M-series chips averaged 40.1°C, versus 65.2°C for Intel processors.

  4. "Darkness" moves faster than light

    According to a study published in Nature, direct measurements of "dark spots" in light waves confirm that they travel faster than light. These dark spots are tiny holes in the wave structure known as vortices, common in ocean waves, airflow, and even coffee. It was predicted in the 1970s that vortices move faster than the wave that forms them, and a research team at the Technion (Israel Institute of Technology) has now confirmed this experimentally. Einstein's relativity establishes the vacuum speed of light as the ultimate speed limit, but it applies to massive matter and to signals carrying energy or information. A light wave's vortex has no mass and carries no energy or information, so no violation of relativity occurs. The vortex is a "zero" of the light wave, a position where the amplitude drops to zero: a point of complete darkness in the light field.

  5. Kim Stanley Robinson says colonizing Mars is pointless

    American science fiction writer Kim Stanley Robinson is best known for his trilogy about colonizing and terraforming Mars: Red Mars, Green Mars, and Blue Mars. Red Mars set the colonization voyage in 2026. In an essay for New Scientist, Robinson revisits Red Mars in light of decades of deeper knowledge of the planet and argues that colonizing Mars currently makes no sense at all. He began writing Red Mars in 1989-1990 and published it in 1992; 35 years later, he says, colonizing Mars looks far harder than once thought. In the early 2000s, Mars rovers found the surface soil laced with perchlorates at roughly one percent concentration; perchlorates are toxic to humans at one part per million, making the Martian surface intensely poisonous. Mars's lower gravity and unshielded space radiation would also damage the human body. Mars is better suited to short scientific expeditions than long-term colonization. More importantly, humanity needs to solve the problems it has created on Earth before going to another planet makes any sense. Robinson says he scoffs at any talk of colonizing Mars, and that billionaires' grand pronouncements about it are pure fantasy.

  6. Long-term silica dust inhalation causes irreversible decline in miners' lung function

    It is well known that miners develop silicosis from inhaling crystalline silica dust on the job. A study published in the Journal of Occupational Medicine and Toxicology identifies a concentration threshold at which silica dust may trigger silicosis. German researchers analyzed 7,116 lung-function measurements taken from 1,418 miners between 1970 and 1991 and identified the point at which lung function begins an accelerated and likely irreversible decline: with long-term exposure above 0.09 mg of silica dust per cubic meter, lung function deteriorates at an accelerating rate, while below that concentration no harmful accumulation occurs.

  7. PS5 prices rise by $100 or more across all models

    With memory and SSD prices soaring over the past few months at a pace unimaginable a year ago, Sony has officially announced price increases of $100/€100 or more across all PS5 models. In the US, the standard PS5 rises from $550 to $650, the Digital Edition from $500 to $600, and the PS5 Pro from $750 to $900. Prices in the UK, Europe, and Japan are seeing similarly large increases.

  8. Spotify seeks $300 million in damages from Anna's Archive

    Spotify and the record labels have asked the court for a $322 million default judgment against the shadow library Anna's Archive, which has never responded to the lawsuit against it. Spotify and the labels also seek a permanent injunction to cut Anna's Archive off from its domain registrars and hosting providers and remove the site from the internet entirely. The lawsuit, filed late last year, has already cost Anna's Archive its .org and backup domains, but it has not made the site disappear, only inconvenienced it into repeatedly switching domains and hosts. In the latest court filing, Spotify and the labels ask that Anna's Archive pay $300 million to Spotify, $7.5 million to Sony, $7.5 million to Universal Music Group (UMG), and $7.2 million to Warner.

  9. Hubble and Webb jointly observe Saturn

    Scientists combined Hubble's visible-light and Webb's infrared observations of Saturn to produce the most comprehensive images of the planet to date. Hubble captured color variations and storm patterns on Saturn's cloud tops, while Webb's infrared views revealed the distribution of clouds and chemical composition at different atmospheric altitudes. The new images show a long-lived jet-stream structure known as a "ribbon wave" at mid-latitudes in the northern hemisphere, whose shape may be influenced by waves in the deep atmosphere. Nearby, scientists also identified remnants of the 2010-2012 "Great Springtime Storm." Several storm systems are visible in the southern hemisphere as well, underscoring how active Saturn's atmosphere is. The images once again capture the famous hexagon at Saturn's north pole, a regular geometric pattern formed by jet streams that was first detected by spacecraft in 1981 and has persisted for decades; it is considered one of the strangest weather phenomena in the solar system. The new observations confirm that the structure is not only enormous but remarkably stable, showing that giant-planet atmospheres can sustain long-lived, stable dynamical structures even under extreme conditions. The infrared data also reveal anomalous spectral features in the polar regions, possibly linked to high-altitude aerosol layers or auroral activity. In the infrared, Saturn's rings, composed mainly of water ice, appear more strongly reflective, and some fine ring structures are rendered more clearly.

  10. Judge rules advertisers' boycott of X was entirely legal

    A series of moves Elon Musk made after acquiring Twitter/X drove advertisers away from the platform; Musk responded by suing the advertisers along with the media organization Media Matters for America. The lawsuit against the media group is still ongoing, but this week a judge dismissed his suit against the advertisers. US District Judge Jane Boyle dismissed the case on the grounds that Musk failed to state a valid claim. Musk alleged the advertisers violated antitrust law by conspiring to boycott Twitter/X to the detriment of consumers, since reduced platform revenue would hamper feature improvements. The judge found that he offered no facts showing consumer harm, and that the advertisers' refusal to buy ads on Twitter/X was entirely lawful and no antitrust violation.

  11. Apple gives FBI the name of a user who sent anonymous threats from an alias email address

    After reading news that FBI Director Kash Patel had used government resources to assign an entire security detail to his girlfriend Alexis Wilkins, an Apple user employed iCloud's Hide My Email feature, which masks a user's real address behind an alias, to send Wilkins an anonymous threatening email. Apple handed the user's real name over to the FBI. The user sent the email on February 28, 2026 from the alias address peaty_terms_1o@icloud.com; his real name is Alden Ruml. Records show his account had generated 134 alias addresses. When questioned by law enforcement, Ruml confirmed that he had sent the anonymous message.

  12. Ubuntu 26.04 LTS Beta released

    The Ubuntu 26.04 LTS Beta has been released; v26.04 is a long-term support release, with the final version due on April 23. Major changes in Ubuntu 26.04 include the Linux 7.0 kernel (still in development, expected within a week or two), the GNOME 50.0 desktop environment, Mesa 26.0 graphics drivers, Python 3.14, GCC 15.2, and a range of other software updates.

  13. How AI erodes our judgment

    According to a study published in Science, AI chatbots that offer advice and support on interpersonal problems can quietly reinforce harmful beliefs through conspicuously sycophantic responses. The study found that across a range of contexts, chatbots affirm human users far more often than people affirm one another; the resulting harms include users becoming more convinced they are right and less willing to repair relationships. Using posts from the Reddit community AITA, the researchers evaluated 11 advanced, widely used large models from OpenAI, Anthropic, Google, and others, and found these systems affirmed users' behavior 49% more often than humans did, even in scenarios involving deception, harm, or lawbreaking. In two follow-up experiments, the researchers examined the behavioral consequences: in interpersonal situations (especially conflicts), participants who interacted with sycophantic AI became more convinced they were right, and even after a single interaction were less willing to reconcile or take responsibility.

  14. Mozilla and Mila partner to advance open-source sovereign AI

    The future of AI should belong to all of humanity, not be confined to a handful of countries or companies. To achieve that, AI must be open, trustworthy, and built in ways that give individuals, institutions, and nations real choice. That is why Mozilla has announced a strategic partnership with Mila, the Quebec AI institute, to jointly advance open-source sovereign AI. Mila and Mozilla will collaborate on technologies and methods that reduce dependence on closed systems and create more room for transparency, accountability, and shared innovation. The two have not yet released further details.

  15. Reddit begins rolling out checks to verify users are human

    Reddit CEO and cofounder Steve Huffman announced Wednesday that the company is rolling out checks to verify that users are human, saying that protecting user privacy will be the first principle: the goal is only to confirm a user is human, not to identify who they are. Reddit says it will require verification only when it detects that an account is suspicious, not for all users of the site; signals of a suspicious account include the speed at which it writes or posts content. To verify that an account is human-owned, Reddit will use third-party tools such as passkeys from Apple, Google, and YubiKey, third-party biometric services such as Face ID, or even Sam Altman's World ID, and in some countries will require government-issued identification.