OrangeBot.AI Digest — 2026-03-29

87 headlines across 8 sources, aggregated for the day.

Hacker News (15)

  1. ChatGPT Won't Let You Type Until Cloudflare Reads Your React State (www.buchodi.com)
  2. The Cognitive Dark Forest (ryelang.org)
  3. The "Vibe Coding" Wall of Shame (crackr.dev)
  4. C++26 is done: ISO C++ standards meeting trip report (herbsutter.com)
  5. Neovim 0.12.0 (github.com)
  6. Pretext: TypeScript library for multiline text measurement and layout (github.com)
  7. The bot situation on the internet is worse than you could imagine (gladeart.com)
  8. Voyager 1 runs on 69 KB of memory and an 8-track tape recorder (techfixated.com)
  9. Full network of clitoral nerves mapped out for first time (www.theguardian.com)
  10. Say No to Palantir in Europe (action.wemove.eu)
  11. Police used AI facial recognition to wrongly arrest TN woman for crimes in ND (www.cnn.com)
  12. LinkedIn uses 2.4 GB RAM across two tabs
  13. Miasma: A tool to trap AI web scrapers in an endless poison pit (github.com)
  14. Nitrile and latex gloves may cause overestimation of microplastics (news.umich.edu)
  15. What if AI doesn't need more RAM but better math? (adlrocha.substack.com)

GitHub Trending (12)

  1. luongnv89 / claude-howto
  2. microsoft / VibeVoice
  3. NousResearch / hermes-agent
  4. OpenBB-finance / OpenBB
  5. obra / superpowers
  6. thedotmack / claude-mem
  7. hacksider / Deep-Live-Cam
  8. mvanhorn / last30days-skill
  9. shareAI-lab / learn-claude-code
  10. fastfetch-cli / fastfetch
  11. moeru-ai / airi
  12. twentyhq / twenty

Product Hunt (15)

  1. Clico

    Every textbox, supercharged

  2. Google Search Live

    Interactive, multimodal conversation in AI Mode

  3. Dipshot

    Capture, Annotate & Export

  4. SUN

    Personalized AI audio lessons generated on demand

  5. Sheet Ninja

    Ship vibe-coded apps. Your data stays in Google Sheets.

  6. Genzi

    The social app built around music

  7. CodingPrep

    Open Source coding interview prep tool with AI interviewer

  8. Parallel Code

    Use Claude Code, Codex, and Gemini in parallel

  9. Peopling

    Practice difficult conversations before they happen

  10. Pensieve

    Full company context for every AI agent

  11. GuideYou

    Guidance for everyday technology

  12. Cline Kanban

    CLI-agnostic kanban for multi-agent orchestration

  13. DwellRecord

    Keep your home records all together

  14. Santana by Deep Softworks

    Performant, real-time data visualization in your terminal

  15. Bulk Exporter for Sora

    1-click backup for your Sora videos, images & prompts.

Hugging Face (15)

  1. PixelSmile: Toward Fine-Grained Facial Expression Editing

    Fine-grained facial expression editing has long been limited by intrinsic semantic overlap. To address this, we construct the Flex Facial Expression (FFE) dataset with continuous affective annotations and establish FFE-Bench to evaluate structural confusion, editing accuracy, linear controllability, and the trade-off between expression editing and identity preservation. We propose PixelSmile, a diffusion framework that disentangles expression semantics via fully symmetric joint training. PixelSmile combines intensity supervision with contrastive learning to produce stronger and more distinguishable expressions, achieving precise and stable linear expression control through textual latent interpolation. Extensive experiments demonstrate that PixelSmile achieves superior disentanglement and robust identity preservation, confirming its effectiveness for continuous, controllable, and fine-grained expression editing, while naturally supporting smooth expression blending.
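
    The "linear expression control through textual latent interpolation" described above presumably amounts to blending latent vectors. A minimal sketch with toy stand-in latents (the real model's embeddings, dimensions, and interpolation details are not given in the abstract):

```python
import numpy as np

def lerp_latent(neutral: np.ndarray, expression: np.ndarray, alpha: float) -> np.ndarray:
    """Linearly interpolate between two text-latent vectors; alpha in [0, 1]
    acts as a continuous expression-intensity dial."""
    return (1.0 - alpha) * neutral + alpha * expression

# Hypothetical 8-dim latents: all-zeros "neutral", all-ones "full smile".
neutral = np.zeros(8)
smile = np.ones(8)
half = lerp_latent(neutral, smile, 0.5)  # halfway-intensity expression
```

    Sliding alpha from 0 to 1 then traces a straight line in latent space, which is what makes the control "linear".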

  2. Intern-S1-Pro: Scientific Multimodal Foundation Model at Trillion Scale

    We introduce Intern-S1-Pro, the first one-trillion-parameter scientific multimodal foundation model. Scaling to this unprecedented size, the model delivers a comprehensive enhancement across both general and scientific domains. Beyond stronger reasoning and image-text understanding capabilities, its intelligence is augmented with advanced agent capabilities. Simultaneously, its scientific expertise has been vastly expanded to master over 100 specialized tasks across critical science fields, including chemistry, materials, life sciences, and earth sciences. Achieving this massive scale is made possible by the robust infrastructure support of XTuner and LMDeploy, which facilitates highly efficient Reinforcement Learning (RL) training at the 1-trillion parameter level while ensuring strict precision consistency between training and inference. By seamlessly integrating these advancements, Intern-S1-Pro further fortifies the fusion of general and specialized intelligence, working as a Specializable Generalist, demonstrating its position in the top tier of open-source models for general capabilities, while outperforming proprietary models in the depth of specialized scientific tasks.

  3. Calibri: Enhancing Diffusion Transformers via Parameter-Efficient Calibration

    In this paper, we uncover the hidden potential of Diffusion Transformers (DiTs) to significantly enhance generative tasks. Through an in-depth analysis of the denoising process, we demonstrate that introducing a single learned scaling parameter can significantly improve the performance of DiT blocks. Building on this insight, we propose Calibri, a parameter-efficient approach that optimally calibrates DiT components to elevate generative quality. Calibri frames DiT calibration as a black-box reward optimization problem, which is efficiently solved using an evolutionary algorithm and modifies just ~100 parameters. Experimental results reveal that despite its lightweight design, Calibri consistently improves performance across various text-to-image models. Notably, Calibri also reduces the inference steps required for image generation, all while maintaining high-quality outputs.
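
    Calibri frames calibration as black-box reward optimization over roughly 100 scaling parameters solved with an evolutionary algorithm. A toy sketch of that loop, with a hypothetical stand-in reward (the paper's actual algorithm, reward, and hyperparameters may differ):

```python
import random

def evolve_scales(reward, n_params=100, iters=200, sigma=0.05, seed=0):
    """Black-box calibration sketch: hill-climb a vector of per-block
    scaling parameters (initialized to 1.0, i.e. 'no change') by Gaussian
    mutation, keeping a mutant only when the reward improves."""
    rng = random.Random(seed)
    scales = [1.0] * n_params
    best = reward(scales)
    for _ in range(iters):
        candidate = [s + rng.gauss(0.0, sigma) for s in scales]
        r = reward(candidate)
        if r > best:
            scales, best = candidate, r
    return scales, best

# Toy stand-in reward: prefers scales near a hidden optimum of 1.1.
toy_reward = lambda s: -sum((x - 1.1) ** 2 for x in s)
scales, score = evolve_scales(toy_reward)
```

    In practice the reward would come from evaluating generations, which is exactly why a derivative-free evolutionary search fits: only ~100 numbers change and no gradients through the DiT are needed.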

  4. RealRestorer: Towards Generalizable Real-World Image Restoration with Large-Scale Image Editing Models

    Image restoration under real-world degradations is critical for downstream tasks such as autonomous driving and object detection. However, existing restoration models are often limited by the scale and distribution of their training data, resulting in poor generalization to real-world scenarios. Recently, large-scale image editing models have shown strong generalization ability in restoration tasks, especially for closed-source models like Nano Banana Pro, which can restore images while preserving consistency. Nevertheless, achieving such performance with those large universal models requires substantial data and computational costs. To address this issue, we construct a large-scale dataset covering nine common real-world degradation types and train a state-of-the-art open-source model to narrow the gap with closed-source alternatives. Furthermore, we introduce RealIR-Bench, which contains 464 real-world degraded images and tailored evaluation metrics focusing on degradation removal and consistency preservation. Extensive experiments demonstrate our model ranks first among open-source methods, achieving state-of-the-art performance.

  5. Voxtral TTS

    We introduce Voxtral TTS, an expressive multilingual text-to-speech model that generates natural speech from as little as 3 seconds of reference audio. Voxtral TTS adopts a hybrid architecture that combines auto-regressive generation of semantic speech tokens with flow-matching for acoustic tokens. These tokens are encoded and decoded with Voxtral Codec, a speech tokenizer trained from scratch with a hybrid VQ-FSQ quantization scheme. In human evaluations conducted by native speakers, Voxtral TTS is preferred for multilingual voice cloning due to its naturalness and expressivity, achieving a 68.4% win rate over ElevenLabs Flash v2.5. We release the model weights under a CC BY-NC license.

  6. MSA: Memory Sparse Attention for Efficient End-to-End Memory Model Scaling to 100M Tokens

    Long-term memory is a cornerstone of human intelligence. Enabling AI to process lifetime-scale information remains a long-standing pursuit in the field. Due to the constraints of full-attention architectures, the effective context length of large language models (LLMs) is typically limited to 1M tokens. Existing approaches, such as hybrid linear attention, fixed-size memory states (e.g., RNNs), and external storage methods like RAG or agent systems, attempt to extend this limit. However, they often suffer from severe precision degradation and rapidly increasing latency as context length grows, an inability to dynamically modify memory content, or a lack of end-to-end optimization. These bottlenecks impede complex scenarios like large-corpus summarization, Digital Twins, and long-history agent reasoning, while limiting memory capacity and slowing inference. We present Memory Sparse Attention (MSA), an end-to-end trainable, efficient, and massively scalable memory model framework. Through core innovations including scalable sparse attention and document-wise RoPE, MSA achieves linear complexity in both training and inference while maintaining exceptional stability, exhibiting less than 9% degradation when scaling from 16K to 100M tokens. Furthermore, KV cache compression, combined with Memory Parallel, enables 100M-token inference on 2xA800 GPUs. We also propose Memory Interleaving to facilitate complex multi-hop reasoning across scattered memory segments. MSA significantly surpasses frontier LLMs, state-of-the-art RAG systems, and leading memory agents in long-context benchmarks. These results demonstrate that by decoupling memory capacity from reasoning, MSA provides a scalable foundation to endow general-purpose models with intrinsic, lifetime-scale memory.

  7. MACRO: Advancing Multi-Reference Image Generation with Structured Long-Context Data

    Generating images conditioned on multiple visual references is critical for real-world applications such as multi-subject composition, narrative illustration, and novel view synthesis, yet current models suffer from severe performance degradation as the number of input references grows. We identify the root cause as a fundamental data bottleneck: existing datasets are dominated by single- or few-reference pairs and lack the structured, long-context supervision needed to learn dense inter-reference dependencies. To address this, we introduce MacroData, a large-scale dataset of 400K samples, each containing up to 10 reference images, systematically organized across four complementary dimensions -- Customization, Illustration, Spatial reasoning, and Temporal dynamics -- to provide comprehensive coverage of the multi-reference generation space. Recognizing the concurrent absence of standardized evaluation protocols, we further propose MacroBench, a benchmark of 4,000 samples that assesses generative coherence across graded task dimensions and input scales. Extensive experiments show that fine-tuning on MacroData yields substantial improvements in multi-reference generation, and ablation studies further reveal synergistic benefits of cross-task co-training and effective strategies for handling long-context complexity. The dataset and benchmark will be publicly released.

  8. SlopCodeBench: Benchmarking How Coding Agents Degrade Over Long-Horizon Iterative Tasks

    Software development is iterative, yet agentic coding benchmarks overwhelmingly evaluate single-shot solutions against complete specifications. Code can pass the test suite but become progressively harder to extend. Recent iterative benchmarks attempt to close this gap, but constrain the agent's design decisions too tightly to faithfully measure how code quality shapes future extensions. We introduce SlopCodeBench, a language-agnostic benchmark comprising 20 problems and 93 checkpoints, in which agents repeatedly extend their own prior solutions under evolving specifications that force architectural decisions without prescribing internal structure. We track two trajectory-level quality signals: verbosity, the fraction of redundant or duplicated code, and structural erosion, the share of complexity mass concentrated in high-complexity functions. No agent solves any problem end-to-end across 11 models; the highest checkpoint solve rate is 17.2%. Quality degrades steadily: erosion rises in 80% of trajectories and verbosity in 89.8%. Against 48 open-source Python repositories, agent code is 2.2x more verbose and markedly more eroded. Tracking 20 of those repositories over time shows that human code stays flat, while agent code deteriorates with each iteration. A prompt-intervention study shows that initial quality can be improved, but it does not halt degradation. These results demonstrate that pass-rate benchmarks systematically undermeasure extension robustness, and that current agents lack the design discipline iterative software development demands.
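
    The "structural erosion" signal, the share of complexity mass concentrated in high-complexity functions, might be computed along these lines; the threshold and exact definition here are assumptions, not the benchmark's published formula:

```python
def erosion(complexities, threshold=10):
    """Share of total cyclomatic-complexity 'mass' held by functions
    whose individual complexity exceeds the threshold."""
    total = sum(complexities)
    heavy = sum(c for c in complexities if c > threshold)
    return heavy / total if total else 0.0

# One sprawling function among small ones concentrates the mass:
eroded = erosion([2, 3, 4, 25])   # 25/34 ≈ 0.735
healthy = erosion([2, 3, 4, 5])   # 0.0: nothing exceeds the threshold
```

    A rising value across checkpoints would mean each extension piles more logic into already-complex functions, the degradation pattern the benchmark reports.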

  9. AVControl: Efficient Framework for Training Audio-Visual Controls

    Controlling video and audio generation requires diverse modalities, from depth and pose to camera trajectories and audio transformations, yet existing approaches either train a single monolithic model for a fixed set of controls or introduce costly architectural changes for each new modality. We introduce AVControl, a lightweight, extendable framework built on LTX-2, a joint audio-visual foundation model, where each control modality is trained as a separate LoRA on a parallel canvas that provides the reference signal as additional tokens in the attention layers, requiring no architectural changes beyond the LoRA adapters themselves. We show that simply extending image-based in-context methods to video fails for structural control, and that our parallel canvas approach resolves this. On the VACE Benchmark, we outperform all evaluated baselines on depth- and pose-guided generation, inpainting, and outpainting, and show competitive results on camera control and audio-visual benchmarks. Our framework supports a diverse set of independently trained modalities: spatially-aligned controls such as depth, pose, and edges, camera trajectory with intrinsics, sparse motion control, video editing, and, to our knowledge, the first modular audio-visual controls for a joint generation model. Our method is both compute- and data-efficient: each modality requires only a small dataset and converges within a few hundred to a few thousand training steps, a fraction of the budget of monolithic alternatives. We publicly release our code and trained LoRA checkpoints.

  10. VFIG: Vectorizing Complex Figures in SVG with Vision-Language Models

    Scalable Vector Graphics (SVG) are an essential format for technical illustration and digital design, offering precise resolution independence and flexible semantic editability. In practice, however, original vector source files are frequently lost or inaccessible, leaving only "flat" rasterized versions (e.g., PNG or JPEG) that are difficult to modify or scale. Manually reconstructing these figures is a prohibitively labor-intensive process, requiring specialized expertise to recover the original geometric intent. To bridge this gap, we propose VFIG, a family of Vision-Language Models trained for complex and high-fidelity figure-to-SVG conversion. While this task is inherently data-driven, existing datasets are typically small-scale and lack the complexity of professional diagrams. We address this by introducing VFIG-DATA, a large-scale dataset of 66K high-quality figure-SVG pairs, curated from a diverse mix of real-world paper figures and procedurally generated diagrams. Recognizing that SVGs are composed of recurring primitives and hierarchical local structures, we introduce a coarse-to-fine training curriculum that begins with supervised fine-tuning (SFT) to learn atomic primitives and transitions to reinforcement learning (RL) refinement to optimize global diagram fidelity, layout consistency, and topological edge cases. Finally, we introduce VFIG-BENCH, a comprehensive evaluation suite with novel metrics designed to measure the structural integrity of complex figures. VFIG achieves state-of-the-art performance among open-source models and performs on par with GPT-5.2, achieving a VLM-Judge score of 0.829 on VFIG-BENCH.
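
    For context, the target format is just structured text: a shape recovered from a raster as an SVG primitive stays resolution-independent and editable. A trivial illustration of such a primitive (not the model's actual output):

```python
def circle_svg(cx: int, cy: int, r: int, fill: str = "orange") -> str:
    """Emit a minimal standalone SVG containing one circle primitive.
    Editing the figure later is just editing these attributes."""
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
        f'<circle cx="{cx}" cy="{cy}" r="{r}" fill="{fill}"/></svg>'
    )

doc = circle_svg(50, 50, 40)
```

    Complex figures compose many such primitives hierarchically, which is why the paper's coarse-to-fine curriculum starts from atomic primitives before optimizing global layout.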

  11. Less Gaussians, Texture More: 4K Feed-Forward Textured Splatting

    Existing feed-forward 3D Gaussian Splatting methods predict pixel-aligned primitives, leading to a quadratic growth in primitive count as resolution increases. This fundamentally limits their scalability, making high-resolution synthesis such as 4K intractable. We introduce LGTM (Less Gaussians, Texture More), a feed-forward framework that overcomes this resolution scaling barrier. By predicting compact Gaussian primitives coupled with per-primitive textures, LGTM decouples geometric complexity from rendering resolution. This approach enables high-fidelity 4K novel view synthesis without per-scene optimization, a capability previously out of reach for feed-forward methods, all while using significantly fewer Gaussian primitives. Project page: https://yxlao.github.io/lgtm/

  12. Representation Alignment for Just Image Transformers is not Easier than You Think

    Representation Alignment (REPA) has emerged as a simple way to accelerate Diffusion Transformer training in latent space. At the same time, pixel-space diffusion transformers such as Just image Transformers (JiT) have attracted growing attention because they remove the dependency on a pretrained tokenizer and thus avoid the reconstruction bottleneck of latent diffusion. This paper shows that REPA can fail for JiT: REPA yields worse FID for JiT as training proceeds and collapses diversity on image subsets that are tightly clustered in the representation space of the pretrained semantic encoder on ImageNet. We trace the failure to an information asymmetry: denoising occurs in the high-dimensional image space, while the semantic target is strongly compressed, making direct regression a shortcut objective. We propose PixelREPA, which transforms the alignment target and constrains alignment with a Masked Transformer Adapter that combines a shallow transformer adapter with partial token masking. PixelREPA improves both training convergence and final quality: it reduces FID from 3.66 to 3.17 for JiT-B/16 and improves Inception Score (IS) from 275.1 to 284.6 on ImageNet 256×256, while achieving >2× faster convergence. Finally, PixelREPA-H/16 achieves FID = 1.81 and IS = 317.2. Our code is available at https://github.com/kaist-cvml/PixelREPA.

  13. MuRF: Unlocking the Multi-Scale Potential of Vision Foundation Models

    Vision Foundation Models (VFMs) have become the cornerstone of modern computer vision, offering robust representations across a wide array of tasks. While recent advances allow these models to handle varying input sizes during training, inference typically remains restricted to a single, fixed scale. This prevalent single-scale paradigm overlooks a fundamental property of visual perception: varying resolutions offer complementary inductive biases, where low-resolution views excel at global semantic recognition and high-resolution views are essential for fine-grained refinement. In this work, we propose Multi-Resolution Fusion (MuRF), a simple yet universally effective strategy to harness this synergy at inference time. Instead of relying on a single view, MuRF constructs a unified representation by processing an image at multiple resolutions through a frozen VFM and fusing the resulting features. The universality of MuRF is its most compelling attribute. It is not tied to a specific architecture, serving instead as a fundamental, training-free enhancement to visual representation. We empirically validate this by applying MuRF to a broad spectrum of critical computer vision tasks across multiple distinct VFM families - primarily DINOv2, but also demonstrating successful generalization to contrastive models like SigLIP2.
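
    The MuRF recipe, encode at several resolutions with the same frozen model, resize the feature maps to a common grid, and average, can be sketched as follows. The "encoder" here is a stand-in (patch-wise mean pooling); a real frozen VFM such as DINOv2 would take its place, and the scales and grid size are illustrative:

```python
import numpy as np

def block_mean(img: np.ndarray, patch: int) -> np.ndarray:
    """Stand-in 'frozen encoder': patch-wise mean features of shape (H/p, W/p)."""
    h, w = img.shape
    return img[:h - h % patch, :w - w % patch] \
        .reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))

def resize_nn(f: np.ndarray, size: int) -> np.ndarray:
    """Nearest-neighbor resize of a 2-D map to (size, size)."""
    rows = np.arange(size) * f.shape[0] // size
    cols = np.arange(size) * f.shape[1] // size
    return f[np.ix_(rows, cols)]

def murf_fuse(img: np.ndarray, patch=4, scales=(0.5, 1.0, 2.0), grid=16) -> np.ndarray:
    """Encode the image at several resolutions with the same frozen encoder,
    resize each feature map to a common grid, and average them."""
    feats = []
    for s in scales:
        view = resize_nn(img, int(img.shape[0] * s))   # crude multi-scale input
        feats.append(resize_nn(block_mean(view, patch), grid))
    return np.mean(feats, axis=0)

img = np.random.rand(64, 64)
fused = murf_fuse(img)   # one unified (16, 16) representation
```

    The design point survives the simplification: nothing is trained, the encoder is shared across scales, and fusion is a plain average, which is what makes the method universal and training-free.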

  14. MemMA: Coordinating the Memory Cycle through Multi-Agent Reasoning and In-Situ Self-Evolution

    Memory-augmented LLM agents maintain external memory banks to support long-horizon interaction, yet most existing systems treat construction, retrieval, and utilization as isolated subroutines. This creates two coupled challenges: strategic blindness on the forward path of the memory cycle, where construction and retrieval are driven by local heuristics rather than explicit strategic reasoning, and sparse, delayed supervision on the backward path, where downstream failures rarely translate into direct repairs of the memory bank. To address these challenges, we propose MemMA, a plug-and-play multi-agent framework that coordinates the memory cycle along both the forward and backward paths. On the forward path, a Meta-Thinker produces structured guidance that steers a Memory Manager during construction and directs a Query Reasoner during iterative retrieval. On the backward path, MemMA introduces in-situ self-evolving memory construction, which synthesizes probe QA pairs, verifies the current memory, and converts failures into repair actions before the memory is finalized. Extensive experiments on LoCoMo show that MemMA consistently outperforms existing baselines across multiple LLM backbones and improves three different storage backends in a plug-and-play manner. Our code is publicly available at https://github.com/ventr1c/memma.

  15. FinMCP-Bench: Benchmarking LLM Agents for Real-World Financial Tool Use under the Model Context Protocol

    This paper introduces FinMCP-Bench, a novel benchmark for evaluating large language models (LLMs) on real-world financial problems solved through tool invocation over financial Model Context Protocol (MCP) servers. FinMCP-Bench contains 613 samples spanning 10 main scenarios and 33 sub-scenarios, featuring both real and synthetic user queries to ensure diversity and authenticity. It incorporates 65 real financial MCPs and three sample types (single-tool, multi-tool, and multi-turn), allowing evaluation of models across different levels of task complexity. Using this benchmark, we systematically assess a range of mainstream LLMs and propose metrics that explicitly measure tool-invocation accuracy and reasoning capabilities. FinMCP-Bench provides a standardized, practical, and challenging testbed for advancing research on financial LLM agents.

Techmeme (15)

  1. Pro-AI group Innovation Council Action, praised by David Sacks, plans to spend $100M+ in the US midterms to drive deregulation and support Trump's AI agenda (Alex Isenstadt/Axios)

    Alex Isenstadt / Axios : Pro-AI group Innovation Council Action, praised by David Sacks, plans to spend $100M+ in the US midterms to drive deregulation and support Trump's AI agenda —  A new pro-AI political operation is jumping into this year's midterms with a plan to spend more than $100 million …

  2. A look at Coinbase One and other insurance-like plans for crypto users that typically exclude coverage for many kinds of account hacks, including phishing scams (Bloomberg)

    Bloomberg : A look at Coinbase One and other insurance-like plans for crypto users that typically exclude coverage for many kinds of account hacks, including phishing scams —  When Matthew Allan realized nearly $100,000 in Bitcoin was missing from his Coinbase account, he wasn't too worried.

  3. Bluesky's CEO talks about Attie, a new agentic social app built on Bluesky's AT Protocol that uses Claude and lets users build custom feeds (Sarah Perez/TechCrunch)

    Sarah Perez / TechCrunch : Bluesky's CEO talks about Attie, a new agentic social app built on Bluesky's AT Protocol that uses Claude and lets users build custom feeds —  The team from Bluesky has built another app — and this time, it's not a social network, but an AI assistant that allows you to design your own algorithm …

  4. Hong Kong-listed AI drug discovery company Insilico Medicine and Eli Lilly sign a drug co-development deal worth up to $2.75B, with $115M in upfront payments (Evelyn Cheng/CNBC)

    Evelyn Cheng / CNBC : Hong Kong-listed AI drug discovery company Insilico Medicine and Eli Lilly sign a drug co-development deal worth up to $2.75B, with $115M in upfront payments —  BEIJING — U.S. pharmaceutical giant Eli Lilly has reached a $2.75 billion deal to bring drugs developed using artificial intelligence …

  5. An AI-generated TikTok parody of reality series Love Island, called Fruit Love Island, averaged 10M+ views across its first 21 episodes after debuting last week (Isabelle Bousquette/Wall Street Journal)

    Isabelle Bousquette / Wall Street Journal : An AI-generated TikTok parody of reality series Love Island, called Fruit Love Island, averaged 10M+ views across its first 21 episodes after debuting last week —  ‘Fruit Love Island’ averages over 10 million views for each of its episodes  —  A new viral dating show featuring sexy …

  6. Q&A with YouTube CEO Neal Mohan on the platform's dominance, its impact on kids, the suspension and reinstatement of Trump's YouTube account, AI slop, and more (Lulu Garcia-Navarro/New York Times)

    Lulu Garcia-Navarro / New York Times : Q&A with YouTube CEO Neal Mohan on the platform's dominance, its impact on kids, the suspension and reinstatement of Trump's YouTube account, AI slop, and more —  YouTube is now the leading way Americans watch video.  Its audience is young; an astonishing 90 percent of American teenagers are on the platform.

  7. Analysis: while social media rewards sensationalism and inflammatory content, LLMs guide people away from extreme positions and towards expert-aligned stances (John Burn-Murdoch/Financial Times)

    John Burn-Murdoch / Financial Times : Analysis: while social media rewards sensationalism and inflammatory content, LLMs guide people away from extreme positions and towards expert-aligned stances —  Large language models elevate expert consensus and moderate views, in sharp contrast to social platforms © FT montage/Getty Images …

  8. A study of 11 leading LLMs finds the models more agreeable than humans when giving interpersonal advice, affirming users' behavior even when harmful or illegal (Stanford University)

    Stanford University : A study of 11 leading LLMs finds the models more agreeable than humans when giving interpersonal advice, affirming users' behavior even when harmful or illegal —  What does it mean to be reasonable?

  9. Vinod Khosla says AI is accelerating a shift of wealth and power away from workers, and an income tax overhaul in the US could offset voter fears about job loss (Financial Times)

    Financial Times : Vinod Khosla says AI is accelerating a shift of wealth and power away from workers, and an income tax overhaul in the US could offset voter fears about job loss —  Vinod Khosla says voter fears over technology causing job losses will shape upcoming US elections

  10. Qualified Health, which helps health systems evaluate and adopt AI tools, raised a $125M Series B led by NEA at a valuation of between $500M and $1B (Heather Landi/Fierce Healthcare)

    Heather Landi / Fierce Healthcare : Qualified Health, which helps health systems evaluate and adopt AI tools, raised a $125M Series B led by NEA at a valuation of between $500M and $1B —  Qualified Health, a startup that works with health systems …

  11. A look at why Dotcom Bubble comparisons to the AI boom are off, vertical SaaS is up +3% last 12 months vs. horizontal SaaS down 35%, and other reflections on AI (Logan Bartlett/@loganbartlett)

    Logan Bartlett / @loganbartlett : A look at why Dotcom Bubble comparisons to the AI boom are off, vertical SaaS is up +3% last 12 months vs. horizontal SaaS down 35%, and other reflections on AI —  This week I co-wrote Redpoint's 2026 Market Update for our Limited Partners with my colleagues @AdilBhatia and @lydianday.

  12. A profile of Mark Lanier, a TX lawyer and part-time pastor who beat Meta and Google in the LA social media case and said Zuckerberg was "rattled" on the stand (Wall Street Journal)

    Wall Street Journal : A profile of Mark Lanier, a TX lawyer and part-time pastor who beat Meta and Google in the LA social media case and said Zuckerberg was “rattled” on the stand —  Plaintiff's attorney Mark Lanier uses props and parables to challenge social-media giants, drugmakers and manufacturers …

  13. Report analyzing payments of 28M US consumers shows Claude adding paid subs at a steadily increasing pace; Anthropic: paid subs have more than doubled this year (Julie Bort/TechCrunch)

    Julie Bort / TechCrunch : Report analyzing payments of 28M US consumers shows Claude adding paid subs at a steadily increasing pace; Anthropic: paid subs have more than doubled this year —  Whatever the final outcome for Anthropic from its feud with the Department of Defense, the attention it has generated …

  14. ShinyHunters says it stole 350GB+ of data in a cyberattack on the European Commission, detected on March 24; the EC says its internal systems were not affected (Pierluigi Paganini/Security Affairs)

    Pierluigi Paganini / Security Affairs : ShinyHunters says it stole 350GB+ of data in a cyberattack on the European Commission, detected on March 24; the EC says its internal systems were not affected —  The European Commission has allegedly been breached by ShinyHunters, with reported data dumps including content from mail servers.

  15. Sources: DHS clears seven CISA staffers of wrongdoing; the staffers had been accused of misleading CISA's former acting director into taking a polygraph test (John Sakellariadis/Politico)

    John Sakellariadis / Politico : Sources: DHS clears seven CISA staffers of wrongdoing; the staffers had been accused of misleading CISA's former acting director into taking a polygraph test —  Former DHS spokesperson Tricia McLaughlin previously told POLITICO the staffers were under investigation for “misleading” …

Solidot (15)

  1. Google's TurboQuant compression algorithm sharply cuts LLM memory use

    Google Research has released TurboQuant, a compression algorithm that greatly reduces the memory footprint of large language models while improving speed and maintaining accuracy. TurboQuant is designed to shrink the key-value cache, described as a "digital cheat sheet" that stores important information to avoid recomputation. LLMs do not truly understand anything; they simulate understanding by mapping tokens to vectors that capture textual semantics. Those vectors are normally encoded in Cartesian (XYZ) coordinates; TurboQuant converts them to polar coordinates, reducing each vector to two kinds of information: a radius (the core strength of the data) and a direction (its meaning). In Cartesian coordinates, a position might be encoded as "walk 3 blocks east, then 4 blocks north"; in polar coordinates, the same information becomes "walk 5 blocks along a 37-degree heading", simplifying the representation and saving computation. Google's early tests show TurboQuant delivering up to 8x performance gains in some benchmarks and cutting memory use to one sixth of the original with no loss in quality. The algorithm should help lower the running cost and memory footprint of AI models, though it may also encourage even more complex models, so it may do little to bring memory prices down.
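
    The blocks-and-bearing analogy in the summary is ordinary Cartesian-to-polar conversion; a quick check of the arithmetic (this illustrates the analogy only, not TurboQuant's actual quantization scheme):

```python
import math

def to_polar(east_blocks: float, north_blocks: float) -> tuple[float, float]:
    """Convert a Cartesian displacement (east, north) to polar form:
    a radius (distance) and a compass bearing in degrees from north."""
    radius = math.hypot(east_blocks, north_blocks)
    bearing = math.degrees(math.atan2(east_blocks, north_blocks))
    return radius, bearing

# The summary's example: 3 blocks east, 4 blocks north.
r, b = to_polar(3, 4)
print(f"{r:.0f} blocks at {b:.0f} degrees")  # 5 blocks at 37 degrees
```

    Two numbers (radius, bearing) replace two coordinates, so the win is not fewer values but a representation whose components (magnitude vs. direction) can be quantized at different precisions.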

  2. Austria plans to ban social media for children under 14

    Following Australia, Denmark, Malaysia, and Norway, Austria plans to strictly limit social media use by children under 14, citing addictive algorithms and content harmful to minors. The government aims to finish a draft law by the end of June; enforcement and age-verification details have not yet been finalized. Vice Chancellor Andreas Babler of the Social Democrats said the government cannot stand by while social media platforms addict and harm children, arguing that social media should be treated like alcohol or tobacco.

  3. Sperm tumble in space like out-of-control astronauts

    According to a study published in Communications Biology, sperm in space tumble like astronauts who have lost control, unable to find their way to the egg. The researchers used a 3D rotator to simulate microgravity, placing human, mouse, and pig sperm samples in a maze mimicking the female reproductive tract; for ethical reasons, no actual eggs were placed in the maze. Compared with controls, about 40% fewer human sperm exposed to simulated microgravity successfully navigated the maze. Adding progesterone helped the sperm overcome their disorientation; the researchers believe this is because eggs also release progesterone, which helps guide sperm.

  4. Windows PCs crash three times as often as Apple Macs

    According to "2026 State of Digital Workspace", a report from Omnissa based on 2025 telemetry from retail, healthcare, finance, education, and government customers worldwide, Windows PCs crash far more often than Macs. Windows devices were force-shut-down 3.1 times as often as Macs; Windows applications became unresponsive 7.5 times as often as macOS applications and required restarts three times as often. In healthcare and pharmaceuticals, more than half of Windows and Android devices lag five major versions behind the latest operating system, likely leaving them more vulnerable to malware and more bug-prone. In education, more than half of desktops and mobile devices are unencrypted, putting student privacy at greater risk. Macs also last longer, replaced on average every five years versus three for Windows PCs, and Apple's M-series chips averaged 40.1°C versus 65.2°C for Intel processors.

  5. "Darkness" travels faster than light

    According to a study published in Nature, direct measurements of "dark spots" in light waves confirm that they move faster than light. These dark spots are tiny holes in the wave structure known as vortices, common in ocean waves, air currents, and even coffee. It was predicted as early as the 1970s that such vortices move faster than the wave that carries them, and a team at the Technion (Israel Institute of Technology) has now confirmed the prediction experimentally. Einstein's relativity establishes the vacuum speed of light as a limit, but that limit applies to massive matter and to signals carrying energy or information. A vortex in a light wave has no mass and carries neither energy nor information, so no violation of relativity occurs. The vortex is a "null point" of the light wave, a position where the amplitude drops to zero: a point of complete darkness in the light field.

  6. Kim Stanley Robinson says colonizing Mars is pointless

    American science-fiction author Kim Stanley Robinson is best known for his trilogy about colonizing and terraforming Mars (Red Mars, Green Mars, and Blue Mars); Red Mars set the first colonization voyage in 2026. In a New Scientist essay revisiting Red Mars in light of decades of deeper knowledge about the planet, Robinson argues that colonizing Mars today makes no sense at all. He began writing Red Mars in 1989-1990 and published it in 1992; 35 years on, he says colonization would be far harder than once believed. Early this century, Mars rovers found the surface soil laced with perchlorates at concentrations around one percent, while perchlorates are toxic to humans at one part per million, making the Martian surface intensely poisonous. Mars's lower gravity and unshielded space radiation would also damage the human body. Mars is better suited to short scientific expeditions than long-term settlement. More importantly, humanity needs to fix the problems it has created on Earth before going to another planet would mean anything. Robinson says he scoffs at all talk of colonizing Mars, and that billionaires' grand pronouncements about it are pure fantasy.

  7. Long-term silica dust exposure causes irreversible lung-function decline in miners

    That miners develop silicosis from inhaling crystalline silica dust on the job is well established. A study in the Journal of Occupational Medicine and Toxicology now identifies a threshold dust concentration likely to trigger the disease. German researchers analyzed 7,116 lung-function measurements from 1,418 miners taken between 1970 and 1991 and pinpointed the level at which lung function begins an accelerated, likely irreversible decline: long-term exposure to silica dust above 0.09 mg per cubic meter accelerates lung-function loss, while below that concentration no harmful accumulation occurs.

  8. PS5 prices rise by $100 or more across all models

    With memory and SSD prices having soared in recent months at a pace unimaginable a year ago, Sony has officially announced increases of $100/€100 or more across all PS5 models. In the US, the standard PS5 rises from $550 to $650, the Digital Edition from $500 to $600, and the PS5 Pro from $750 to $900, with similarly steep increases in the UK, Europe, and Japan.

  9. Spotify seeks $300 million in damages from Anna's Archive

    Spotify and the record labels have asked the court for a $322 million default judgment against the shadow library Anna's Archive, which has never responded to the lawsuit against it. They are also seeking a permanent injunction to cut Anna's Archive off from its domain registrars and hosting providers and scrub the site from the internet entirely. The suit, filed late last year, has already cost Anna's Archive its main .org domain and its backup domains, but it has not killed the site, merely inconvenienced it, forcing repeated changes of domains and hosts. In the latest court filings, Spotify and the labels ask Anna's Archive to pay Spotify $300 million, Sony $7.5 million, Universal Music Group (UMG) $7.5 million, and Warner $7.2 million.

  10. Hubble and Webb jointly observe Saturn

    Scientists combined Hubble's visible-light and Webb's infrared observations of Saturn to produce the most comprehensive images of the planet to date. Hubble captured color variations and storm patterns on the cloud tops, while Webb's infrared view revealed the distribution of clouds and chemical composition at different atmospheric altitudes. The new images show a long-lived jet structure known as a "ribbon wave" at northern mid-latitudes, whose shape may be influenced by waves in the deep atmosphere; nearby, scientists also identified remnants of Saturn's 2010-2012 "Great Springtime Storm". Several storm systems are visible in the southern hemisphere as well, underscoring how active Saturn's atmosphere is. The images once again capture the famous hexagon at Saturn's north pole: this regular geometric pattern formed by jet streams was first spotted by spacecraft in 1981 and has persisted for decades, regarded as one of the strangest weather phenomena in the solar system. The new observations confirm that the structure is not only enormous but remarkably stable, showing that a giant planet's atmosphere can sustain long-lived, stable dynamical structures even under extreme conditions. The infrared data also reveal anomalous spectral features in the polar regions, possibly linked to high-altitude aerosol layers or auroral activity, and in the infrared the rings, composed mainly of water ice, appear more strongly reflective, with some fine ring structures rendered more clearly.

  11. Judge rules advertisers' boycott of X entirely legal

    After Elon Musk acquired Twitter/X, a series of his decisions drove advertisers away from the platform; Musk responded by suing the advertisers and the media watchdog Media Matters for America. The media suit is still ongoing, but this week a judge threw out his suit against the advertisers. US District Judge Jane Boyle dismissed the case because Musk failed to state a valid claim. Musk alleged the advertisers violated antitrust law by conspiring to boycott Twitter/X, harming consumers because the platform's reduced revenue impeded feature improvements. The judge found he offered no facts showing consumer harm, and that advertisers' refusal to buy ads on Twitter/X is entirely legal and violates no antitrust law.

  12. Apple gives the FBI the name of a user who sent anonymous threats from a burner address

    After reading news that FBI Director Kash Patel had used government resources to assign a full security detail to his girlfriend Alexis Wilkins, an Apple user sent Wilkins anonymous threatening messages using iCloud's Hide My Email feature, which masks the sender's real address. Apple handed the user's real name to the FBI. He sent the email on February 28, 2026 from the alias peaty_terms_1o@icloud.com; his real name is Alden Ruml, and records show his account had generated 134 alias addresses. Questioned by law enforcement, Ruml confirmed he had sent the anonymous messages.

  13. Ubuntu 26.04 LTS Beta released

    The Ubuntu 26.04 LTS Beta is out; v26.04 is a long-term support release, with the final version scheduled for April 23. Major changes include the Linux 7.0 kernel (still in development, due within a week or two), the GNOME 50.0 desktop environment, Mesa 26.0 graphics drivers, Python 3.14, GCC 15.2, and a raft of other software updates.

  14. How AI undermines our judgment

    According to a study published in Science, AI chatbots that offer advice and support on interpersonal problems can quietly reinforce harmful beliefs through conspicuously sycophantic answers. The study found that across contexts, chatbots affirm their human users far more often than people affirm one another, with harmful consequences: users become more convinced they are right and less willing to repair relationships. Using posts from the Reddit community "AITA", the researchers evaluated 11 advanced, widely used models from OpenAI, Anthropic, Google, and others, finding that these systems affirmed users' behavior 49% more often than humans did, even in scenarios involving deception, harm, or lawbreaking. Two follow-up experiments examined the behavioral consequences: in interpersonal situations, especially conflicts, participants who interacted with a sycophantic AI became more convinced they were right, and even after a single interaction their willingness to reconcile or take responsibility declined.

  15. Mozilla and Mila partner to advance open-source sovereign AI

    The future of AI should belong to all of humanity, not be confined to a handful of countries or companies. For that to happen, AI must be open and trustworthy, and built in ways that give individuals, institutions, and nations real choice. To that end, Mozilla has announced a strategic partnership with Mila, the Quebec AI institute, to jointly advance open-source sovereign AI. Mila and Mozilla will collaborate on technologies and methods that reduce dependence on closed systems and create more room for transparency, accountability, and shared innovation. Neither party has released further details yet.