TEXT VIEW · TODAY'S DIGEST · 36 HEADLINES ACROSS 8 SOURCES


ISSUE 0855
MON, MAY 4, 2026
Discover the best information organized by OrangeBot.AI

The web,
read by a bot.

Ten sources — Hacker News, Product Hunt, HuggingFace, Techmeme and more — filtered, tagged, and summarized every morning for builders who don’t have time to scroll.

NEW · Chrome extension: save posts from Twitter/X in one click. Install →
01

AI DIGEST

UPDATED DAILY · EDITOR'S PICK

AI News Summary

May 4, 2026



Today's News Overview

GameStop Makes Takeover Bid for eBay
Video game retailer GameStop, led by CEO Ryan Cohen, has made a bid to acquire the online resale giant eBay. The offer is $125 a share in cash and stock, signaling a major strategic move to expand beyond its core business.

U.S. Denies Report of Attack on Navy Ship in Middle East
The U.S. government has officially denied reports that an American warship was struck near the Strait of Hormuz. The initial (and now refuted) claim caused a temporary spike in oil prices, highlighting market sensitivity to geopolitical tensions in the region.

AI Firm Anthropic Secures Major Investment
AI startup Anthropic is set to receive significant funding, with investors including Blackstone, Hellman & Friedman, and Goldman Sachs each expected to contribute around $300 million. The deal underscores strong investor confidence in the artificial intelligence sector.

Home Foreclosures Rise Following Changes to Mortgage Rules
A recent analysis shows a noticeable increase in home foreclosures. The trend is attributed to tightened rules for federal mortgage subsidies implemented during the previous administration.

American Express Global Business Travel to Be Taken Private in $6.3B Deal
Investment firm Long Lake Management is acquiring American Express Global Business Travel in an all-cash deal valued at approximately $6.3 billion. The acquisition will take the corporate travel management company private.

Hubbell to Acquire NSI Industries for $3 Billion
In a major industrial deal, Hubbell has agreed to purchase NSI Industries for $3 billion. The acquisition is intended to expand Hubbell's product offerings for its electrical and utility customers, strengthening its position in the critical infrastructure market.

Chip Startup Cerebras Systems Plans $3.5 Billion IPO
AI chip startup Cerebras Systems announced a planned initial public offering (IPO) aiming to raise up to $3.5 billion. The company will offer 28 million shares, signaling continued momentum in the semiconductor and AI hardware markets.

Hong Kong Hedge Fund Manager Faces Insider Trading Charges
Simon Sadler, founder of the prominent Asia-based hedge fund Segantii Capital, is facing criminal charges in Hong Kong. The charges are related to alleged insider trading of shares in the clothing retailer Esprit.

02

ON THE WIRE

6 SOURCES
02

HACKER NEWS


Hacker News - May 4, 2026

Hacker News Feed: Highlighting key posts and discussions.

Humanoid Robot Actuators

(www.firgelli.com)

15672
Let's Buy Spirit Air

(letsbuyspiritair.com)

434414
Why TUIs are back

(wiki.alcidesfonseca.com)

377381
Southwest Headquarters Tour

(katherinemichel.github.io)

28184
03

HUGGINGFACE


HuggingFace - May 4, 2026

HuggingFace Papers Feed: Highlighting trending research papers.

UniVidX: A Unified Multimodal Framework for Versatile Video Generation via Diffusion Priors

Recent progress has shown that video diffusion models (VDMs) can be repurposed for diverse multimodal graphics tasks. However, existing methods often train separate models for each problem setting, which fixes the input-output mapping and limits the modeling of correlations across modalities. We present UniVidX, a unified multimodal framework that leverages VDM priors for versatile video generation. UniVidX formulates pixel-aligned tasks as conditional generation in a shared multimodal space, adapts to modality-specific distributions while preserving the backbone's native priors, and promotes cross-modal consistency during synthesis. It is built on three key designs. Stochastic Condition Masking (SCM) randomly partitions modalities into clean conditions and noisy targets during training, enabling omni-directional conditional generation instead of fixed mappings. Decoupled Gated LoRA (DGL) introduces per-modality LoRAs that are activated when a modality serves as the generation target, preserving the strong priors of the VDM. Cross-Modal Self-Attention (CMSA) shares keys and values across modalities while keeping modality-specific queries, facilitating information exchange and inter-modal alignment. We instantiate UniVidX in two domains: UniVid-Intrinsic, for RGB videos and intrinsic maps including albedo, irradiance, and normal; and UniVid-Alpha, for blended RGB videos and their constituent RGBA layers. Experiments show that both models achieve performance competitive with state-of-the-art methods across distinct tasks and generalize robustly to in-the-wild scenarios, even when trained on fewer than 1,000 videos. Project page: https://houyuanchen111.github.io/UniVidX.github.io/

64
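The Stochastic Condition Masking idea above can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: UniVidX applies the split to latent tensors during diffusion training, while the sketch below only partitions modality names; all names are assumptions.

```python
import random

def stochastic_condition_masking(modalities, rng):
    # Randomly partition modalities into clean conditions and noisy
    # targets, guaranteeing at least one target to denoise. With the
    # partition resampled every step, the model sees every conditional
    # direction instead of one fixed input-output mapping.
    shuffled = list(modalities)
    rng.shuffle(shuffled)
    n_targets = rng.randint(1, len(shuffled))  # 1..all may be targets
    targets = set(shuffled[:n_targets])
    conditions = set(shuffled) - targets
    return conditions, targets

rng = random.Random(0)
mods = ["rgb", "albedo", "irradiance", "normal"]
conditions, targets = stochastic_condition_masking(mods, rng)
```

Resampling the split per training example is what makes the resulting model "omni-directional": any subset can condition, any subset can be generated.
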
Web2BigTable: A Bi-Level Multi-Agent LLM System for Internet-Scale Information Search and Extraction

Agentic web search increasingly faces two distinct demands: deep reasoning over a single target, and structured aggregation across many entities and heterogeneous sources. Current systems struggle on both fronts. Breadth-oriented tasks demand schema-aligned outputs with wide coverage and cross-entity consistency, while depth-oriented tasks require coherent reasoning over long, branching search trajectories. We introduce Web2BigTable, a multi-agent framework for web-to-table search that supports both regimes. Web2BigTable adopts a bi-level architecture in which an upper-level orchestrator decomposes the task into sub-problems and lower-level worker agents solve them in parallel. Through a closed-loop run-verify-reflect process, the framework jointly improves decomposition and execution over time via persistent, human-readable external memory, with self-evolving updates to each individual agent. During execution, workers coordinate through a shared workspace that makes partial findings visible, allowing them to reduce redundant exploration, reconcile conflicting evidence, and adapt to emerging coverage gaps. Web2BigTable sets a new state of the art on WideSearch, reaching an Avg@4 Success Rate of 38.50 (7.5× the second best at 5.10), Row F1 of 63.53 (+25.03 over the second best), and Item F1 of 80.12 (+14.42 over the second best). It also generalises to depth-oriented search on XBench-DeepSearch, achieving 73.0 accuracy. Code is available at https://github.com/web2bigtable/web2bigtable.

22
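The bi-level decomposition with a shared workspace can be sketched roughly as follows; `toy_worker`, the entity names, and the field names are hypothetical stand-ins for real search agents and schemas.

```python
def orchestrate(entities, fields, worker):
    # Upper level: decompose the table-building task into one
    # sub-problem per entity; lower-level workers fill the rows.
    workspace = {}  # shared workspace: partial findings stay visible
    for entity in entities:
        known = workspace.get(entity, {})
        # Workers skip columns already resolved, reducing redundant search.
        missing = [f for f in fields if f not in known]
        workspace[entity] = {**known, **worker(entity, missing)}
    return workspace

def toy_worker(entity, cols):
    # Hypothetical worker: a real one would browse, extract, and verify.
    return {c: f"{entity}:{c}" for c in cols}

table = orchestrate(["acme", "globex"], ["ceo", "hq"], toy_worker)
```

The real system adds the run-verify-reflect loop and persistent memory on top of this skeleton; the sketch only shows the decomposition and the shared-workspace contract.
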
Map2World: Segment Map Conditioned Text to 3D World Generation

3D world generation is essential for applications such as immersive content creation or autonomous driving simulation. Recent advances in 3D world generation have shown promising results; however, these methods are constrained by grid layouts and suffer from inconsistencies in object scale throughout the entire world. In this work, we introduce a novel framework, Map2World, that first enables 3D world generation conditioned on user-defined segment maps of arbitrary shapes and scales, ensuring global-scale consistency and flexibility across expansive environments. To further enhance the quality, we propose a detail enhancer network that generates fine details of the world. The detail enhancer enables the addition of fine-grained details without compromising overall scene coherence by incorporating global structure information. We design the entire pipeline to leverage strong priors from asset generators, achieving robust generalization across diverse domains, even under limited training data for scene generation. Extensive experiments demonstrate that our method significantly outperforms existing approaches in user-controllability, scale consistency, and content coherence, enabling users to generate 3D worlds under more complex conditions.

9
Learning while Deploying: Fleet-Scale Reinforcement Learning for Generalist Robot Policies

Generalist robot policies increasingly benefit from large-scale pretraining, but offline data alone is insufficient for robust real-world deployment. Deployed robots encounter distribution shifts, long-tail failures, task variations, and human correction opportunities that fixed demonstration datasets cannot fully capture. We present Learning While Deploying (LWD), a fleet-scale offline-to-online reinforcement learning framework for continual post-training of generalist Vision-Language-Action (VLA) policies. Starting from a pretrained VLA policy, LWD closes the loop between deployment, shared physical experience, policy improvement, and redeployment by using autonomous rollouts and human interventions collected across a robot fleet. To stabilize learning from heterogeneous, sparse-reward fleet data, LWD combines Distributional Implicit Value Learning (DIVL) for robust value estimation with Q-learning via Adjoint Matching (QAM) for policy extraction in flow-based VLA action generators. We validate LWD on a fleet of 16 dual-arm robots across eight real-world manipulation tasks, including semantic grocery restocking and 3–5 minute long-horizon tasks. A single generalist policy improves as fleet experience accumulates, reaching an average success rate of 95%, with the largest gains on long-horizon tasks.

8
From Skill Text to Skill Structure: The Scheduling-Structural-Logical Representation for Agent Skills

LLM agents increasingly rely on reusable skills: capability packages that combine instructions, control flow, constraints, and tool calls. In most current agent systems, however, skills are still represented by text-heavy artifacts, including SKILL.md-style documents and structured records whose machine-usable evidence remains embedded largely in natural-language descriptions. This poses a challenge for skill-centered agent systems: managing skill collections and using skills to support agents both require reasoning over invocation interfaces, execution structure, and concrete side effects that are often entangled in a single textual surface. An explicit representation of skill knowledge may therefore help make these artifacts easier for machines to acquire and leverage. Drawing on Memory Organization Packets, Script Theory, and Conceptual Dependency from Schank and Abelson's classical work on linguistic knowledge representation, we introduce what is, to our knowledge, the first structured representation for agent skill artifacts that disentangles skill-level scheduling signals, scene-level execution structure, and logic-level action and resource-use evidence: the Scheduling-Structural-Logical (SSL) representation. We instantiate SSL with an LLM-based normalizer and evaluate it on a corpus of skills in two tasks, Skill Discovery and Risk Assessment, where it substantially outperforms text-only baselines: in Skill Discovery, SSL improves MRR from 0.573 to 0.707; in Risk Assessment, it improves macro F1 from 0.744 to 0.787. These findings reveal that explicit, source-grounded structure makes agent skills easier to search and review. They also suggest that SSL is best understood as a practical step toward more inspectable, reusable, and operationally actionable skill representations for agent systems, rather than as a finished standard or an end-to-end mechanism for managing and using skills.

7
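A minimal sketch of the three-level split. The abstract does not give the paper's schema, so the field names, methods, and example values below are hypothetical; the point is only that scheduling, structure, and logic live in separate, machine-readable slots instead of one prose blob.

```python
from dataclasses import dataclass

@dataclass
class SSLSkill:
    triggers: list   # scheduling level: when should an agent pick this skill?
    scenes: list     # structural level: ordered execution scenes
    actions: dict    # logical level: concrete actions/resources per scene

    def risk_surfaces(self):
        # Flatten logic-level evidence so side effects can be reviewed
        # directly (cf. the Risk Assessment task) rather than mined
        # out of natural-language descriptions.
        return [a for scene in self.scenes
                for a in self.actions.get(scene, [])]

skill = SSLSkill(
    triggers=["deploy release"],
    scenes=["build", "publish"],
    actions={"build": ["run: make"], "publish": ["write: registry"]},
)
```
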
Let ViT Speak: Generative Language-Image Pre-training

In this paper, we present Generative Language-Image Pre-training (GenLIP), a minimalist generative pretraining framework for Vision Transformers (ViTs) designed for multimodal large language models (MLLMs). To better align vision encoders with the autoregressive nature of LLMs, GenLIP trains a ViT to predict language tokens directly from visual tokens using a standard language modeling objective, without contrastive batch construction or an additional text decoder. This design offers three key advantages: (1) Simplicity: a single transformer jointly models visual and textual tokens; (2) Scalability: it scales effectively with both data and model size; and (3) Performance: it achieves competitive or superior results across diverse multimodal benchmarks. Trained on 8B samples from Recap-DataComp-1B, GenLIP matches or surpasses strong baselines despite using substantially less pretraining data. After continued pretraining on multi-resolution images at native aspect ratios, GenLIP further improves on detail-sensitive tasks such as OCR and chart understanding, making it a strong foundation for vision encoders in MLLMs.

5
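The objective described above, a standard language-modeling loss applied only at language-token positions while visual tokens act purely as context, can be sketched numerically. The target probabilities here are made-up values; a real model would produce them autoregressively from the joint visual-plus-text sequence.

```python
import math

def genlip_style_loss(token_types, target_probs):
    # Average negative log-likelihood over language tokens only:
    # visual tokens condition the prediction but receive no loss.
    nll = [-math.log(p)
           for t, p in zip(token_types, target_probs) if t == "text"]
    return sum(nll) / len(nll)

# Two visual tokens followed by two text tokens (hypothetical values).
loss = genlip_style_loss(["vis", "vis", "text", "text"],
                         [0.90, 0.10, 0.50, 0.25])
```

Masking the visual positions out of the loss is what lets a single transformer serve as the vision encoder without a contrastive batch or a separate text decoder.
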
When Do Diffusion Models Learn to Generate Multiple Objects?

Text-to-image diffusion models achieve impressive visual fidelity, yet they remain unreliable in multi-object generation. Despite extensive empirical evidence of these failures, the underlying causes remain unclear. We begin by asking how much of this limitation arises from the data itself. To disentangle data effects, we consider two regimes across different dataset sizes: (1) concept generalization, where each individual concept is observed during training under potentially imbalanced data distributions, and (2) compositional generalization, where specific combinations of concepts are systematically held out. To study these regimes, we introduce mosaic (Multi-Object Spatial relations, AttrIbution, Counting), a controlled framework for dataset generation. By training diffusion models on mosaic, we find that scene complexity plays a dominant role rather than concept imbalance, and that counting is uniquely difficult to learn in low-data regimes. Moreover, compositional generalization collapses as more concept combinations are held out during training. These findings highlight fundamental limitations of diffusion models and motivate stronger inductive biases and data design for robust multi-object compositional generation.

2
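The compositional-generalization split described above, holding out specific concept combinations at training time, can be sketched as follows; the shape and color vocabularies are illustrative, not mosaic's actual concept set.

```python
import itertools
import random

def compositional_split(shapes, colors, n_holdout, rng):
    # Hold out specific shape-color combinations for evaluation. With a
    # small holdout, each individual concept typically still appears in
    # training, isolating combination novelty from concept novelty
    # (a sketch of mosaic's compositional regime).
    combos = list(itertools.product(shapes, colors))
    rng.shuffle(combos)
    heldout = set(combos[:n_holdout])
    train = [c for c in combos if c not in heldout]
    return train, heldout

rng = random.Random(7)
train, heldout = compositional_split(
    ["cube", "sphere", "cone"], ["red", "blue"], n_holdout=2, rng=rng)
```

Growing `n_holdout` is the knob the paper turns when it reports that compositional generalization collapses as more combinations are withheld.
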
Trees to Flows and Back: Unifying Decision Trees and Diffusion Models

Decision trees and diffusion models are ostensibly disparate model classes, one discrete and hierarchical, the other continuous and dynamic. This work unifies the two by establishing a crisp mathematical correspondence between hierarchical decision trees and diffusion processes in appropriate limiting regimes. Our unification reveals a shared optimization principle: Global Trajectory Score Matching (GTSM), for which gradient boosting (in an idealized version) is asymptotically optimal. We underscore the conceptual value of our work through two key practical instantiations: TreeFlow, which achieves competitive generation quality on tabular data with higher fidelity and a 2× computational speedup, and DSMTree, a novel distillation method that transfers hierarchical decision logic into neural networks, matching teacher performance within 2% on many benchmarks.

2
End-to-End Autoregressive Image Generation with 1D Semantic Tokenizer

Autoregressive image modeling relies on visual tokenizers to compress images into compact latent representations. We design an end-to-end training pipeline that jointly optimizes reconstruction and generation, enabling direct supervision from generation results to the tokenizer. This contrasts with prior two-stage approaches that train tokenizers and generative models separately. We further investigate leveraging vision foundation models to improve 1D tokenizers for autoregressive modeling. Our autoregressive generative model achieves strong empirical results, including a state-of-the-art FID score of 1.48 without guidance on ImageNet 256x256 generation.

2
Talker-T2AV: Joint Talking Audio-Video Generation with Autoregressive Diffusion Modeling

Joint audio-video generation models have shown that unified generation yields stronger cross-modal coherence than cascaded approaches. However, existing models couple modalities throughout denoising via pervasive attention, treating high-level semantics and low-level details in a fully entangled manner. This is suboptimal for talking head synthesis: while audio and facial motion are semantically correlated, their low-level realizations (acoustic signals and visual textures) follow distinct rendering processes. Enforcing joint modeling across all levels causes unnecessary entanglement and reduces efficiency. We propose Talker-T2AV, an autoregressive diffusion framework where high-level cross-modal modeling occurs in a shared backbone, while low-level refinement uses modality-specific decoders. A shared autoregressive language model jointly reasons over audio and video in a unified patch-level token space. Two lightweight diffusion transformer heads decode the hidden states into frame-level audio and video latents. Experiments on talking portrait benchmarks show Talker-T2AV outperforms dual-branch baselines in lip-sync accuracy, video quality, and audio quality, achieving stronger cross-modal consistency than cascaded pipelines.

1
Online Self-Calibration Against Hallucination in Vision-Language Models

Large Vision-Language Models (LVLMs) often suffer from hallucinations, generating descriptions that include visual details absent from the input image. Recent preference alignment methods typically rely on supervision distilled from stronger models such as GPT. However, this offline paradigm introduces a Supervision-Perception Mismatch: the student model is forced to align with fine-grained details beyond its perceptual capacity, learning to guess rather than to see. To obtain reliable self-supervision for online learning, we identify a Generative-Discriminative Gap within LVLMs, where models exhibit higher accuracy on discriminative verification than open-ended generation. Leveraging this capability, we propose Online Self-CAlibRation (OSCAR), a framework that integrates Monte Carlo Tree Search with a Dual-Granularity Reward Mechanism to construct preference data and iteratively refines the model via Direct Preference Optimization. Extensive experiments demonstrate that OSCAR achieves state-of-the-art performance on hallucination benchmarks while improving general multimodal capabilities.

1
Learning to Act and Cooperate for Distributed Black-Box Consensus Optimization

Distributed black-box consensus optimization is a fundamental problem in multi-agent systems, where agents must improve a global objective using only local objective queries and limited neighbor communication. Existing methods largely rely on handcrafted update rules and static cooperation patterns, which often struggle to balance local adaptation, global coordination, and communication efficiency in heterogeneous nonconvex environments. In this paper, we take an initial step toward trajectory-driven self-design for distributed black-box consensus optimization. We first redesign the agent-level swarm dynamics with an adaptive internal mechanism tailored to decentralized consensus settings, improving the balance between exploration, convergence, and local escape. Built on top of this adaptive execution layer, we propose Learning to Act and Cooperate (LAC-MAS), a trajectory-driven framework in which large language models provide sparse high-level guidance for shaping both agent-internal action behaviors and agent-external cooperation patterns from historical optimization trajectories. We further introduce a phased cognitive scheduling strategy to activate different forms of adaptation in a resource-aware manner. Experiments on standard distributed black-box benchmarks and real-world distributed tasks show that LAC-MAS consistently improves solution quality, convergence efficiency, and communication efficiency over strong baselines, suggesting a practical route from handcrafted distributed coordination toward self-designing multi-agent optimization systems.

1
LASE: Language-Adversarial Speaker Encoding for Indic Cross-Script Identity Preservation

A speaker encoder used in multilingual voice cloning should treat the same speaker identically regardless of which script the audio was uttered in. Off-the-shelf encoders do not, and the failure is accent-conditional. On a 1043-pair Western-accented voice corpus across English, Hindi, Telugu, and Tamil, WavLM-base-plus-sv loses 0.082 absolute cosine similarity when the same voice changes script and ECAPA-TDNN loses 0.105. On a 1369-pair Indian-accented voice corpus, the gap shrinks to 0.006 (WavLM-SV) and 0.044 (ECAPA-TDNN). The leak is largest where it matters most for cross-script TTS: when a system projects a non-Indic-trained voice into Indic scripts. We present LASE (Language-Adversarial Speaker Encoder), a small projection head over frozen WavLM-base-plus trained with two losses: a supervised contrastive loss over voice identity, and a gradient-reversal cross-entropy against a 4-language classifier that pushes the embedding to be language-uninformative while remaining speaker-informative. Trained on 1118 quality-gated cross-script pairs synthesised from 8 commercial multilingual voices, LASE's residual gap is consistent with zero on both corpora (Delta = 0.013 Western, Delta = 0.026 Indian; both bootstrap 95% CIs include zero) and amplifies the cross-script-vs-floor margin 2.4-2.7x over both baselines. An ECAPA+GRL ablation shows the GRL objective improves either backbone but the WavLM choice contributes too. In synthetic multi-speaker diarisation, LASE matches ECAPA-TDNN on cross-script speaker recall (0.788 vs 0.789) with ~100x less training data. We release the r1 checkpoint, both corpora, and the bootstrap recipe.

1
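The gradient-reversal trick at the core of the adversarial loss is tiny in isolation: identity on the forward pass, negated (and scaled) gradient on the backward pass. A framework-free sketch of that contract, with hand-rolled forward/backward functions rather than a real autograd hook:

```python
def grad_reverse(x, lam):
    # Identity in the forward direction; the returned backward function
    # flips and scales the upstream gradient. The language classifier
    # above the layer trains normally, while the encoder below it is
    # pushed to *remove* language information. Real implementations
    # register this as a custom op in an autograd engine.
    def backward(upstream_grad):
        return -lam * upstream_grad
    return x, backward

embedding = 2.0  # stand-in for one speaker-embedding coordinate
out, backward = grad_reverse(embedding, lam=0.5)
```

Combined with a supervised contrastive loss on speaker identity, this is what lets the embedding stay speaker-informative while becoming language-uninformative.
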
AnalogRetriever: Learning Cross-Modal Representations for Analog Circuit Retrieval

Analog circuit design relies heavily on reusing existing intellectual property (IP), yet searching across heterogeneous representations such as SPICE netlists, schematics, and functional descriptions remains challenging. Existing methods are largely limited to exact matching within a single modality, failing to capture cross-modal semantic relationships. To bridge this gap, we present AnalogRetriever, a unified tri-modal retrieval framework for analog circuit search. We first build a high-quality dataset on top of Masala-CHAI through a two-stage repair pipeline that raises the netlist compile rate from 22% to 100%. Built on this foundation, AnalogRetriever encodes schematics and descriptions with a vision-language model and netlists with a port-aware relational graph convolutional network, mapping all three modalities into a shared embedding space via curriculum contrastive learning. Experiments show that AnalogRetriever achieves an average Recall@1 of 75.2% across all six cross-modal retrieval directions, significantly outperforming existing baselines. When integrated into the AnalogCoder agentic framework as a retrieval-augmented generation module, it consistently improves functional pass rates and enables previously unsolved tasks to be completed. Our code and dataset will be released.

1
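Once all three modalities live in one embedding space, cross-modal retrieval and its Recall@1 metric reduce to nearest-neighbor search by cosine similarity. A sketch with made-up 2-D embeddings (the real model's embeddings are high-dimensional and learned):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def recall_at_1(queries, gallery, ground_truth):
    # Fraction of queries whose most-similar gallery item is the
    # ground-truth match (one cross-modal retrieval direction).
    hits = 0
    for qid, qvec in queries.items():
        best = max(gallery, key=lambda gid: cosine(qvec, gallery[gid]))
        hits += best == ground_truth[qid]
    return hits / len(queries)

# Hypothetical netlist-query vs. schematic-gallery embeddings.
queries = {"netlist_a": (1.0, 0.1), "netlist_b": (0.1, 1.0)}
gallery = {"schem_a": (0.9, 0.0), "schem_b": (0.0, 0.8)}
score = recall_at_1(queries, gallery,
                    {"netlist_a": "schem_a", "netlist_b": "schem_b"})
```

The paper's reported average is over all six direction pairs (netlist, schematic, description, in both roles); the sketch shows one direction.
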
Themis: Training Robust Multilingual Code Reward Models for Flexible Multi-Criteria Scoring

Reward models (RMs) have become an indispensable fixture of the language model (LM) post-training playbook, enabling policy alignment and test-time scaling. Research on the application of RMs in code generation, however, has been comparatively sparse, with existing work largely focusing on execution feedback. This choice constrains post-training to optimizing functional correctness over self-contained executable code. In this work, we examine the training and evaluation of multilingual, multi-criteria code RMs. To this end, we first compile Themis-CodeRewardBench, a benchmark to evaluate code RMs across five preference dimensions (i.e., criteria) and eight programming languages, on which we profile 50+ code, math, and general-purpose RMs. Observing the limited proficiency of current RMs beyond scoring for functional correctness, we develop Themis-CodePreference, the largest open-source collection of code preferences to date (more than 350k preference pairs), and use it to train Themis-RM, a suite of multilingual code reward models for flexible multi-criteria scoring, ranging in size from 600M to 32B parameters. Our experiments and ablations demonstrate positive scaling trends, strong cross-lingual transfer when training on diverse preferences, and the importance of multi-criteria training for reliable code reward modeling.

0
05

PRODUCT HUNT


Product Hunt - May 4, 2026

Product Hunt Daily Feed: Featuring noteworthy tech launches.

Mindra

Agent Teams You Can Actually Delegate To

0
Visitor profiles and timeline by Croct

Uncover the story behind every click to optimize your site

0
Claude Code & Codex Usage Trading Cards by Rudel

Get your trading card based on your CC & codex usage

0
Regulus by Cumbuca

AI chatbot trained on Brazil's Central Bank regulations

0
Aaavatar

Branded team headshots in one drop

0
Panels Store

Buy DRM-free comics and read them instantly in Panels

0
Flowly

Your personal AI assistant, native to your desktop

0
Sleek Analytics for iOS

Your website analytics in your pocket

0
Replyke V7

Pre-Modeled Infra & Client SDKs for User-Powered Products.

0
Dropy

Track prices on stores like Amazon, eBay, & AliExpress

0
Codex Pets

Animated companions for your Codex workflow

0
Manex

Preserve useful answers, corrections, and context as memory

0
Rosentic

Catch when coding agents break each other before merge

0
Huddle01 VMs

Virtual Machines for Your Agents

0
Radar

The missing open-source Kubernetes UI

0
Aximote In-Car App

The fitness tracker for your car

0
Mockin 2.0

Ultimate career toolkit for UX/UI & Product designers

0
PandaProbe

open source agent engineering platform

0
Ara

Build an entire business by texting

0
Scholé

Turn everyday work into personalized AI learning

0
Filect

Organize Your Files With AI

0
YouTube TV Custom Multiview

Mix and match up to 4 live streams at once

0
Cloud Computer by Manus

A dedicated cloud machine for bots and software

0
Feather

Photo editor with local AI

0
Microsoft Copilot Health

Dedicated space to bring your personal health data together

0
Breaks

A quiet Pomodoro that lives in your menu bar.

0
Beauty Diagram

Diagrams that don't look like they were auto-generated

0
Genspark for Word

Draft, edit, and research inside Microsoft Word with AI

0
Zush

Updated: docs support, BYOK, Local AI (Ollama), Windows App

0
AnyDrop

AirDrop for the browser: share files, chat and sync notes

0
Marx Finance

AI agents debate the markets

0
Bitgrain

Design studio lighter than Figma & more flexible than Canva

0
Postiz

Agentic social media scheduler for agents like OpenClaw

0
LaunchCut

Interactive iOS Demo Builder

0
Zed 1.0

High-performance, open source, multiplayer code editor

0
TrafficClaw

Have a conversation with your SEO & analytics data

0
HiveTerm

One workspace for Claude, Codex, Gemini and your stack

0
Montage

The runtime framework for agentic user interfaces!

0
MUSIXQUARE

Turn any room into a surround system with your devices

0
nudge

Drop your tasks. AI auto-schedules your whole week.

0
ScreenVeil

Hide what shouldn’t be seen on your computer

0
CipherLock

Learn ciphers by breaking them

0
PeekFocus

One keystroke blurs everything behind your active window

0
Ghosted

Pause media or lock your screen when you step away

0
Buda

Recruit agents to run your company as a synchronous team

0
Adoptly

Turn product releases into feature adoption

0
Mistral Medium 3.5

A 128B model for coding, reasoning, and long tasks

0
Basedash Dashboard Agent

Builds entire dashboards from a single prompt

0
ElevenMusic

AI-assisted music creation with built-in discovery, royalty

0
Tinfoil

AI chat and API that keeps your conversations fully private

0
06

TECHMEME


Techmeme - May 4, 2026

Techmeme Digest: Major tech headlines and industry conversations.

eBay's stock jumps 6%+ to ~$111, below GameStop's $125/share acquisition offer, in a sign investors see hurdles to completing a deal; GameStop's stock drops ~4% (Bloomberg)
Source: Techmeme · Published: May 4, 2026

Bloomberg : eBay's stock jumps 6%+ to ~$111, below GameStop's $125/share acquisition offer, in a sign investors see hurdles to completing a deal; GameStop's stock drops ~4% —  Meme Stock GameStop Pitches $56 Billion eBay Takeover

Filing: Elon Musk texted Greg Brockman about settling days before trial; after being denied, he said Brockman and Altman "will be the most hated men in America" (Ashley Capoot/CNBC)
Source: Techmeme · Published: May 4, 2026

Ashley Capoot / CNBC : Filing: Elon Musk texted Greg Brockman about settling days before trial; after being denied, he said Brockman and Altman “will be the most hated men in America” —  Two days before Elon Musk's multi-billion-dollar lawsuit against OpenAI was slated to head to trial …

Source: OpenAI has raised over $4B at a $10B pre-money valuation for The Deployment Company, a new joint venture to help businesses adopt OpenAI tools (Seth Fiegerman/Bloomberg)
Source: Techmeme · Published: May 4, 2026

Seth Fiegerman / Bloomberg : Source: OpenAI has raised over $4B at a $10B pre-money valuation for The Deployment Company, a new joint venture to help businesses adopt OpenAI tools —  OpenAI has raised more than $4 billion for a new joint venture that will focus on helping businesses adopt its artificial intelligence software …

Enzo Health, whose AI tools help home health and hospice agencies automate tasks like patient intake and documentation review, raised a $20M Series A led by N47 (Brock E.W. Turner/Axios)
Source: Techmeme · Published: May 4, 2026

Brock E.W. Turner / Axios : Enzo Health, whose AI tools help home health and hospice agencies automate tasks like patient intake and documentation review, raised a $20M Series A led by N47 —  Enzo Health, an AI workflow company for post-acute care, raised a $20 million Series A led by N47, CEO Zach Newman tells Axios Pro exclusively.

How AI tools such as SuperBrain and Naver's Talking Buddy are helping South Korea's elderly ease loneliness, detect emergencies, and slow cognitive decline (Choe Sang-Hun/New York Times)
Source: Techmeme · Published: May 4, 2026

Choe Sang-Hun / New York Times : How AI tools such as SuperBrain and Naver's Talking Buddy are helping South Korea's elderly ease loneliness, detect emergencies, and slow cognitive decline —  In the world's fastest aging society, artificial intelligence is being used to make care calls to older adults who live alone and to fight dementia.

Amazon debuts Supply Chain Services, which lets companies use its logistics network to move, store, and deliver everything from raw materials to final products (Deborah Sophia/Reuters)
Source: Techmeme · Published: May 4, 2026

Deborah Sophia / Reuters : Amazon debuts Supply Chain Services, which lets companies use its logistics network to move, store, and deliver everything from raw materials to final products —  Amazon.com (AMZN.O) said on Monday it was rolling out “Amazon Supply Chain Services”, opening up its logistics network for other businesses to use.

Instructure reported a data breach on April 30; ShinyHunters added Instructure to its victims list and claims it has 3.65TB of data from ~9K institutions (Ionut Arghire/SecurityWeek)
Source: Techmeme · Published: May 4, 2026

Ionut Arghire / SecurityWeek : Instructure reported a data breach on April 30; ShinyHunters added Instructure to its victims list and claims it has 3.65TB of data from ~9K institutions —  Hackers disrupted services and stole names, email addresses, student ID numbers, and user messages.

Jensen Huang said Nvidia's market share of AI accelerators in China has "now dropped to zero" and that the US' export policy "has already largely backfired" (Anton Shilov/Tom's Hardware)
Source: Techmeme · Published: May 4, 2026

Anton Shilov / Tom's Hardware : Jensen Huang said Nvidia's market share of AI accelerators in China has “now dropped to zero” and that the US' export policy “has already largely backfired” —  US export restrictions bite. … Nvidia CEO Jensen Huang said that the company's market share of AI accelerators in China has now dropped to 0%.

Cerebras seeks a valuation of up to $26.62B in its US IPO, aiming to raise $3.5B by selling 28M shares at $115 to $125 apiece in its second attempt to go public (Reuters)
Source: Techmeme · Published: May 4, 2026

Reuters : Cerebras seeks a valuation of up to $26.62B in its US IPO, aiming to raise $3.5B by selling 28M shares at $115 to $125 apiece in its second attempt to go public —  Nvidia-rival Cerebras is seeking a valuation of as much as $26.62 billion in its U.S. initial public offering …

Legislators and experts criticize the EU's €20B sovereign compute data center plan, questioning whether there is demand and the plan's reliance on Nvidia GPUs (Pieter Haeck/Politico)
Source: Techmeme · Published: May 4, 2026

Pieter Haeck / Politico : Legislators and experts criticize the EU's €20B sovereign compute data center plan, questioning whether there is demand and the plan's reliance on Nvidia GPUs —  BRUSSELS — A €20 billion European Union plan to build massive artificial intelligence computing hubs is drawing widespread criticism ahead of launch.

An analysis of 1.6M Polymarket accounts since November 2022: 0.1% of users get 67% of the profits, with the highest-frequency traders seeing the most success (Wall Street Journal)
Source: Techmeme · Published: May 4, 2026

Wall Street Journal : An analysis of 1.6M Polymarket accounts since November 2022: 0.1% of users get 67% of the profits, with the highest-frequency traders seeing the most success —  A WSJ analysis shows a small number of accounts on Polymarket and Kalshi—often pros using data-driven algorithmic trading—take home most of the winnings

A profile of William Savitt, Sam Altman's lead lawyer against Elon Musk, who represented Twitter against Musk in 2022 and helped OpenAI's for-profit transition (Jacob Shamsian/Business Insider)
Source: Techmeme · Published: May 4, 2026

Jacob Shamsian / Business Insider : A profile of William Savitt, Sam Altman's lead lawyer against Elon Musk, who represented Twitter against Musk in 2022 and helped OpenAI's for-profit transition —  William Savitt is representing Sam Altman in Elon Musk's lawsuit trying to dismantle OpenAI.

Chinese state media: AI-generated Chinese microdramas to be worth $3B+ in 2026, out of a $14B+ total microdrama market, boosted by tools like Seedance 2.0 (New York Times)
Source: Techmeme · Published: May 4, 2026

New York Times : Chinese state media: AI-generated Chinese microdramas to be worth $3B+ in 2026, out of a $14B+ total microdrama market, boosted by tools like Seedance 2.0 —  This actor was just hitting his stride when A.I.-generated dramas took off and roles disappeared.  —  This director is thrilled …

Sources: some lenders are exploring private deals to sell their data center debt, and some banks are seeking to offload their Oracle-linked loans at a discount (Financial Times)
Source: Techmeme · Published: May 4, 2026

Financial Times : Sources: some lenders are exploring private deals to sell their data center debt, and some banks are seeking to offload their Oracle-linked loans at a discount —  Global lenders explore private deals and risk transfers to cut exposure to AI boom  —  Banks are hunting for new ways …

Sources: SoftBank plans to make lithium- and cobalt-free data center batteries in Japan as soon as FY 2027, as Japan tries to cut its reliance on Chinese metals (Natsuki Yamamoto/Nikkei Asia)
Source: Techmeme · Published: May 4, 2026

Natsuki Yamamoto / Nikkei Asia : Sources: SoftBank plans to make lithium- and cobalt-free data center batteries in Japan as soon as FY 2027, as Japan tries to cut its reliance on Chinese metals —  TOKYO — Japan's SoftBank Group will seek to make batteries that do not require expensive lithium and cobalt in Japan …

07

STARTUP ARCHIVE

07.00
STARTUP ARCHIVE

Startup News - May 4, 2026

Startup News Roundup: Aggregating key funding and launch updates.

Marc Andreessen on the 5 personality traits of an innovator
Source: Startup · Published: Mar 31, 2026

“When you’re talking about real innovators—people who actually do really creative, breakthrough work—I think you’re talking about a couple things:”

Steve Jobs explains the importance of both thinking and doing
Source: Startup · Published: Mar 30, 2026

“The doers are the major thinkers. The people who really create the things that change this industry are both the thinker-doer in one person.”

Tobi Lutke explains what the VCs who passed on Shopify got wrong
Source: Startup · Published: Mar 27, 2026

“What a lot of free-market thinkers don’t understand is that between the demand and eventual supply lies friction.”

Sam Altman explains how he decides to invest in a startup after 10 minutes
Source: Startup · Published: Mar 26, 2026

“Does this person have the potential to be the next Mark Zuckerberg?… [You don’t get to] 100% accuracy, obviously, but it’s good enough that our business model works.”

Jony Ive recounts the time Steve Jobs called him vain
Source: Startup · Published: Mar 25, 2026

In the clip below, Jony Ive recounts the time he asked Steve Jobs to be less harsh in his critique of a piece of work.

Jeff Bezos’s two pieces of advice for aspiring entrepreneurs
Source: Startup · Published: Mar 24, 2026

“The advice that I would give entrepreneurs is don’t chase the hot new thing. It’s so hard to catch something that everybody already knows is hot.”

Elad Gil: “Things that work tend to work pretty fast”
Source: Startup · Published: Mar 23, 2026

“I do think there’s a bit of a myth in Silicon Valley that you should keep grinding no matter what and it’s just about perseverance, and I think that’s really bad advice.”

Paul Graham on why starting with a “small, intense fire" is the key to startup growth
Source: Startup · Published: Mar 20, 2026

"You have to know who those first users are and how you're going to get them."

Keith Rabois on how to identify great talent
Source: Startup · Published: Mar 19, 2026

“What you want to do with every single employee every single day is expand the scope of their responsibilities until it breaks… and that’s the role they should stay in.”

Wealthfront CEO on why advertising spend makes it harder to find product/market fit
Source: Startup · Published: Mar 18, 2026

“The way that you know you have product/market fit is if you have exponential organic growth."

Eric Schmidt on why most companies get strategy wrong
Source: Startup · Published: Mar 17, 2026

“Work very, very hard to figure out what the world’s going to look like in five years. What will people be doing? What will your customers want? Where will costs be?"

Mark Zuckerberg: “You can’t 80/20 everything”
Source: Startup · Published: Mar 16, 2026

"There’s the famous 80/20 rule where you get 80% of the benefit by doing 20% of the work, but you can’t just 80/20 everything. There have to be certain things that you are just the best at."

Marc Andreessen on Mark Zuckerberg’s founder “superpower”
Source: Startup · Published: Mar 13, 2026

“A great superpower that Mark Zuckerberg has that is probably not well-understood enough is he does not get emotionally upset in stressful situations”

Sam Altman explains how to come up with a great startup idea
Source: Startup · Published: Mar 12, 2026

"If you start a startup without a good idea… you’ll be under pressure to make something up and it won’t work that well."

Jeff Bezos on the problems with proxies and managing to metrics
Source: Startup · Published: Mar 11, 2026

“One of the things that happens in business is that you develop certain things that you’re managing to—a typical case would be a metric. And that metric isn’t the real underlying thing.”

Airbnb founder Brian Chesky on how to design an amazing user experience
Source: Startup · Published: Mar 10, 2026

“If you can design something really amazing using the hand-crafted part of your brain, then you can reverse-engineer how to industrialize this millions of times over.”

Spencer Rascoff: "I will never invest in a consumer startup with paid marketing”
Source: Startup · Published: Mar 9, 2026

“If you’re actually trying to grow a product, the best levers for doing that are often within the product itself.”

Patrick Collison explains why it sometimes makes sense to quit
Source: Startup · Published: Mar 6, 2026

“One thing I’ve learned myself the hard way, is that it is easier to tear down a company and restart it in Silicon Valley, than it is to constantly try to pivot or keep something alive.”

Jeff Bezos recounts the time he called Amazon’s customer service number mid-meeting to prove a metric was wrong
Source: Startup · Published: Mar 5, 2026

“I have a saying, which is when the data and the anecdotes disagree, the anecdotes are usually right”

Ben Horowitz: “Nobody was born a great manager. It’s a very unnatural job.”
Source: Startup · Published: Mar 4, 2026

“If you can’t build a great product, it doesn’t matter if you can build a great company.”

03

ALSO TODAY

3 MORE SOURCES
08

SOLIDOT

08.00
SOLIDOT

Solidot News - May 4, 2026

Solidot Feed: Highlighting essential tech & open-source news.

VS Code inserts a Co-Authored-By Copilot trailer into commits by default

Microsoft's editor VS Code was found to insert a Co-Authored-By Copilot trailer into commits by default, regardless of whether the user had used its AI assistant Copilot. The discovery once again drew heavy criticism from users. Microsoft developers responded that the default-on behavior will be fixed in the next release, saying that if a user did not use the AI assistant, the code should not be credited as co-written by Copilot.
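The trailer at issue follows Git's "Co-authored-by: Name <email>" convention: trailer lines at the end of a commit message that hosting platforms parse to credit co-authors. A minimal sketch of detecting such a trailer (the exact name and email VS Code writes are assumptions here, used only as placeholders):

```javascript
// Example commit message carrying a co-author trailer; the name and email
// shown are placeholders, not the exact text VS Code inserts.
const message = `Fix off-by-one in pagination

Co-authored-by: Copilot <copilot@example.com>
`;

// Extract the names from any Co-authored-by trailer lines in a message.
function coauthors(msg) {
  return [...msg.matchAll(/^Co-authored-by:\s*(.+?)\s*</gm)].map((m) => m[1]);
}

console.log(coauthors(message)); // [ 'Copilot' ]
```

Because the credit lives only in the message text, deleting the trailer line before committing is enough to drop the attribution.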

China's green-tech exports grew 70% in March

Amid a new energy crisis triggered by the blockade of the Strait of Hormuz, countries worldwide are accelerating the shift to clean energy. China, the largest exporter of green technology, saw its combined exports of solar products, batteries, and electric vehicles grow 70% year over year in March: exported solar capacity reached 68 GW, battery exports hit $10 billion, and exports of electric and hybrid vehicles rose 140% year over year. As many as 50 countries imported record volumes of solar equipment from China.

Linux share among Steam users stands at 4.52%

In March 2026 the share of Steam players on Linux hit an unprecedented 5.33%, more than double the previous month. According to Valve's Steam hardware and software survey for April 2026, the Linux share has fallen back to 4.52%, down 0.81 percentage points, but still double the figure from a year earlier. Windows rose to 93.47% and OSX accounted for 2.01%. There is ample evidence that gaming on Linux has improved dramatically, and one notable trait of Linux gaming is that it needs fewer resources than Windows, which is especially attractive at a time of soaring memory prices. Other figures: Simplified Chinese users account for 23.41% and English users for 36.77%; 55.81% of users run Intel CPUs and 44.18% AMD, almost unchanged from the previous month.

UK NHS prepares to close all its open-source repositories, citing AI

Scheduling platform Cal.com announced last month that it was moving from open source to closed source, arguing that AI tools make it easier to find vulnerabilities in open code and that, since its security relies on obscurity, going closed improves security. Now the UK's National Health Service (NHS) is preparing to close nearly all of its open-source repositories on the same grounds, a decision that has drawn widespread controversy and criticism. Critics point out that most of the NHS's public repositories are datasets, internal tools, guidelines, research tools, and front-end designs, which are unaffected by advances in security-scanning technology. Moreover, whether code is open makes little difference to AI tools like Anthropic Mythos, which can also analyze binaries for vulnerabilities. Critics have published an open letter calling on the NHS to keep its code public.

Hangzhou court rules that laying off a worker to replace him with AI is illegal

The Hangzhou Intermediate People's Court published a ruling in a case about "AI replacing human employees," finding that a company acted unlawfully in dismissing an employee because "AI costs less than labor" and ordering it to pay RMB 260,000 in compensation. In the case, Xiao Zhou (a pseudonym), now 35, joined a Hangzhou tech company in 2022 as a "quality inspector" for an AI large model, responsible for judging the correctness of answers produced by the model's interactions with users. In 2025 the company tried to demote him and cut his pay — from supervisor to ordinary employee, with monthly salary reduced from RMB 25,000 to RMB 15,000 — on the grounds that "the AI model has been upgraded, and inspection work that used to require humans can now be done by the AI itself." When Xiao Zhou refused the arrangement, the company terminated his employment contract. He filed for labor arbitration, and the tribunal ruled the company should pay more than RMB 260,000 in compensation for unlawful termination. The company disagreed and went to court. The Hangzhou Intermediate People's Court held that the termination was not driven by factors such as discontinuing a business line, poor operations, or cutting losses, but by AI's cost advantage, which does not qualify as a "major change in objective circumstances" rendering the labor contract impossible to perform. Furthermore, the demotion-and-pay-cut plan the company had offered amounted to a sharp drop in compensation and was not a reasonably negotiated arrangement. The court therefore found the dismissal unlawful, upheld the arbitration result, and ordered the company to compensate Xiao Zhou at the 2N standard. Ding Ye, presiding judge of the court's Fifth Civil Division, told the media that from a company's perspective, using AI to raise efficiency and cut costs is an inevitable choice in market competition, while from a worker's perspective, losing a job or taking a pay cut because of technological change effectively shifts the risk of ordinary technical iteration onto the worker.

People can communicate and learn while dreaming

Many people have had the experience of getting an idea in a dream, a phenomenon that has prompted scientists to study sleep learning. In 1954, Charles W. Simon and William H. Emmons argued that participants in most sleep-learning studies were actually awake, rendering such research meaningless; they classed sleep learning with science fiction and pseudoscience, and for decades afterwards few researchers touched the topic. In recent years, however, scientists have returned to it. The new research focuses on lucid dreamers — people who remain conscious during sleep and are aware they are dreaming. In a study published in Neuroscience of Consciousness, 20 lucid dreamers tried to solve puzzles in their dreams in the lab. Each puzzle was paired with a specific sound intended to cue them to resume working on the corresponding puzzle. In the lab, participants solved 42% of the puzzles that had appeared in their dreams, versus only 17% of the puzzles that had not. Most people do not have lucid dreams, so the subjects are not representative. One explanation the researchers propose is that, while asleep, we are more likely to link unrelated stimuli. The researchers do not recommend disrupting sleep for the sake of sleep learning: sleep is a vital physiological process, and interfering with it may cost more than it gains.

Ask.com shuts down

The 30-year-old search engine Ask.com shut down on May 1, 2026. Founded in June 1996 as AskJeeves.com, it dropped the Jeeves name in 2006 to become Ask.com, a search engine with its own crawler and algorithms. Facing competition from the large search engines, it outsourced its web-search technology in 2010 and returned to operating as a question-and-answer site. Although Ask.com has closed, AskJeeves.com continues to operate. "Jeeves" means a valet; the name comes from British author P. G. Wodehouse's Jeeves series, in which Jeeves is the valet of the gentleman Bertie Wooster.

Why OpenAI's system prompt specifically restricts goblins

The system prompt for OpenAI's Codex CLI includes a dedicated restriction on words such as goblins: "never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query." OpenAI explained that starting with GPT-5.1 its models began using words like goblin in metaphors far more often: goblin usage in ChatGPT rose 175% and gremlin usage rose 52%. An investigation found that the Nerdy personality had inadvertently rewarded such metaphors, spreading the high-frequency goblin behavior. To fix the problem, OpenAI retired the Nerdy personality, removed the goblin-friendly reward signal, and filtered related examples from the training data to prevent the behavior from reappearing inappropriately.

Switzerland to vote in June on capping its population at 10 million

Switzerland will hold a referendum on June 14 on whether to cap its resident population at 10 million by 2050. Switzerland's fertility rate is 1.29 children per woman, far below the replacement rate of 2.1, and its population growth is driven mainly by immigration. The population has already passed 9 million, and official figures show foreign citizens made up more than 27% of the total in 2024. The proposal, backed by the right-wing Swiss People's Party, demands that "Switzerland's resident population must not exceed 10 million before 2050, and Switzerland should abandon its free-movement agreement with the EU." In the latest poll of 16,176 Swiss respondents, 52% supported or leaned toward supporting the proposal, 46% opposed it, and the rest expressed no position.

Kernel root privilege-escalation flaw "Copy Fail" disclosed

The Xint Code team reported a kernel root privilege-escalation vulnerability dubbed Copy Fail. The flaw is very easy to exploit and affects almost every kernel version since 2017; the kernel security team's failure to notify distributions before disclosure has also sparked controversy. The kernel does not mark the corrupted pages for writeback, so the file contents on disk are unchanged, but the page cache in memory has been tampered with. Because the system reads from the page cache when a file is accessed, the corrupted data immediately affects the whole system. A local unprivileged user can gain root by corrupting the page cache of a setuid binary, and since the page cache is shared between host and containers, an attacker can exploit the flaw across container boundaries. The vulnerability affects nearly all distributions; the major distributions have released or are preparing patches.

Mozilla opposes Chrome's Prompt API

In 2025 Google Chrome proposed the Prompt API, a unified JavaScript API for a browser-integrated local model that must be downloaded before use. Google also intends the API to become a W3C standard. The model integrated in desktop Chrome is Gemini Nano; using it requires a local device with at least 4 GB of VRAM, 16 GB of RAM, and at least 22 GB of free space on the drive holding the browser. Mozilla developers have published a statement opposing Chrome's Prompt API. They argue the API has enormous interoperability problems: models each have distinctive quirks, so system prompts need model-specific tuning, and a tweak made for one model can be an overcorrection for another. To achieve interoperability, Mozilla and Apple might have to license Google's model or ship a model whose behavior is compatible with Google's. Another major problem is the lack of model neutrality.
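For a sense of the surface Mozilla is objecting to, here is a hedged sketch of how a page might call such an API. The `LanguageModel` name follows Chrome's public Prompt API explainer, but the exact signatures are assumptions and may differ across versions; the sketch simply reports unavailability where the browser does not expose the API:

```javascript
// Hedged sketch of the Chrome Prompt API surface; "LanguageModel" and its
// methods are taken from Chrome's explainer and are assumptions here.
async function summarize(text) {
  // Feature-detect: outside supporting browsers this global is absent.
  if (typeof LanguageModel === "undefined") {
    return null;
  }
  // In Chrome, create() may trigger a one-time download of the local model.
  const session = await LanguageModel.create();
  return session.prompt(`Summarize in one sentence: ${text}`);
}
```

Mozilla's interoperability concern maps onto the last line: the same prompt string can behave quite differently depending on which local model a given browser ships.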

09

APP STORE RANK

09.00
APP STORE RANK
No items yet for today.