Weekly Digest — 2026-W12
217 unique stories (2026-03-16 → 2026-03-22), aggregated across 8 sources.
Hacker News(42)
- Leanstral: Open-Source foundation for trustworthy vibe-coding (mistral.ai)
- Palestinian boy, 12, describes how Israeli forces killed his family in car (www.bbc.com)
- Meta’s renewed commitment to jemalloc (engineering.fb.com)
- AirPods Max 2 (www.apple.com)
- The “small web” is bigger than you might think (kevinboone.me)
- US Job Market Visualizer (karpathy.ai)
- Python 3.15's JIT is now back on track (fidget-spinner.github.io)
- A Decade of Slug (terathon.com)
- If you thought code writing speed was your problem you have bigger problems (andrewmurphy.io)
- Illinois Introducing Operating System Account Age Bill (www.ilga.gov)
- GPT‑5.4 Mini and Nano (openai.com)
- Microsoft's 'unhackable' Xbox One has been hacked by 'Bliss' (www.tomshardware.com)
GitHub Trending(24)
Product Hunt(42)
- GLM-5-Turbo
High-speed agentic model built specifically for OpenClaw
- Masko Code
A mascot that watches Claude Code for you
- MuleRun
Raise an AI that actually learns how you work
- Donely
Your own OpenClaw instance for $0/mo + free AI usage offer
- JetBrains Air
Run Codex, Claude Agents, Gemini CLI, and Junie side by side
- Faces
Interactive presentations that use the full power of the web
- DLSS 5
The GPT moment for real-time computer graphics
- Sokosumi
Marketing agents that research, plan, and manage for you
- Easy App Reports
Bring your app's data to Looker Studio, BigQuery, or AI
- Kipps.AI Campaign
Lead qualification, bulk outreach, and anniversary reminders
- Angy
Multi‑agent pipelines w/ AI‑driven scheduling + safety check
- Ocean Orchestrator
Run AI jobs from your IDE with a one-click workflow
Hugging Face(31)
- LMEB: Long-horizon Memory Embedding Benchmark
Memory embeddings are crucial for memory-augmented systems, such as OpenClaw, but their evaluation is underexplored in current text embedding benchmarks, which narrowly focus on traditional passage retrieval and fail to assess models' ability to handle long-horizon memory retrieval tasks involving fragmented, context-dependent, and temporally distant information. To address this, we introduce the Long-horizon Memory Embedding Benchmark (LMEB), a comprehensive framework that evaluates embedding models' capabilities in handling complex, long-horizon memory retrieval tasks. LMEB spans 22 datasets and 193 zero-shot retrieval tasks across 4 memory types: episodic, dialogue, semantic, and procedural, with both AI-generated and human-annotated data. These memory types differ in terms of level of abstraction and temporal dependency, capturing distinct aspects of memory retrieval that reflect the diverse challenges of the real world. We evaluate 15 widely used embedding models, ranging from hundreds of millions to ten billion parameters. The results reveal that (1) LMEB provides a reasonable level of difficulty; (2) Larger models do not always perform better; (3) LMEB and MTEB exhibit orthogonality. This suggests that the field has yet to converge on a universal model capable of excelling across all memory retrieval tasks, and that performance in traditional passage retrieval may not generalize to long-horizon memory retrieval. In summary, by providing a standardized and reproducible evaluation framework, LMEB fills a crucial gap in memory embedding evaluation, driving further advancements in text embedding for handling long-term, context-dependent memory retrieval. LMEB is available at https://github.com/KaLM-Embedding/LMEB.
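The 193 zero-shot retrieval tasks presumably reduce to the standard embed-and-rank loop used by text embedding benchmarks. A minimal sketch of that loop, with recall@k as an illustrative metric (an assumption on our part; LMEB's actual scoring may use nDCG or other measures, and `recall_at_k` is a hypothetical helper, not benchmark code):

```python
import numpy as np

def recall_at_k(query_embs, doc_embs, relevant, k=5):
    """Zero-shot retrieval scoring: rank corpus documents by cosine
    similarity to each query and check whether the gold document
    appears in the top-k.

    query_embs: (nq, d) array of query embeddings
    doc_embs:   (nd, d) array of corpus embeddings
    relevant:   list of length nq; relevant[i] is the index of the
                gold document for query i
    """
    # L2-normalize so the dot product equals cosine similarity
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    sims = q @ d.T                           # (nq, nd) similarity matrix
    topk = np.argsort(-sims, axis=1)[:, :k]  # top-k doc indices per query
    hits = [relevant[i] in topk[i] for i in range(len(relevant))]
    return sum(hits) / len(hits)
```

The long-horizon twist is in the data, not the loop: the "documents" are fragmented, temporally distant memory entries rather than self-contained passages, which is why MTEB-style passage-retrieval scores need not transfer.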
- Cheers: Decoupling Patch Details from Semantic Representations Enables Unified Multimodal Comprehension and Generation
A recent cutting-edge topic in multimodal modeling is to unify visual comprehension and generation within a single model. However, the two tasks demand mismatched decoding regimes and visual representations, making it non-trivial to jointly optimize within a shared feature space. In this work, we present Cheers, a unified multimodal model that decouples patch-level details from semantic representations, thereby stabilizing semantics for multimodal understanding and improving fidelity for image generation via gated detail residuals. Cheers includes three key components: (i) a unified vision tokenizer that encodes and compresses image latent states into semantic tokens for efficient LLM conditioning, (ii) an LLM-based Transformer that unifies autoregressive decoding for text generation and diffusion decoding for image generation, and (iii) a cascaded flow matching head that decodes visual semantics first and then injects semantically gated detail residuals from the vision tokenizer to refine high-frequency content. Experiments on popular benchmarks demonstrate that Cheers matches or surpasses advanced UMMs in both visual understanding and generation. Cheers also achieves 4x token compression, enabling more efficient high-resolution image encoding and generation. Notably, Cheers outperforms the Tar-1.5B on the popular benchmarks GenEval and MMBench, while requiring only 20% of the training cost, indicating effective and efficient (i.e., 4x token compression) unified multimodal modeling. We will release all code and data for future research.
- Can Vision-Language Models Solve the Shell Game?
Visual entity tracking is an innate cognitive ability in humans, yet it remains a critical bottleneck for Vision-Language Models (VLMs). This deficit is often obscured in existing video benchmarks by visual shortcuts. We introduce VET-Bench, a synthetic diagnostic testbed featuring visually identical objects that necessitate tracking exclusively through spatiotemporal continuity. Our experiments reveal that current state-of-the-art VLMs perform at or near chance level on VET-Bench, exposing a fundamental limitation: an over-reliance on static frame-level features and a failure to maintain entity representations over time. We provide a theoretical analysis drawing connections to the state-tracking problem, proving that fixed-depth transformer-based VLMs are fundamentally limited in tracking indistinguishable objects without intermediate supervision due to expressivity constraints. To address this, we propose Spatiotemporal Grounded Chain-of-Thought (SGCoT): generating object trajectories as explicit intermediate states. Leveraging Molmo2's object tracking ability, we elicit SGCoT reasoning by fine-tuning on synthesized text-only data for alignment. Our method achieves state-of-the-art accuracy exceeding 90% on VET-Bench, demonstrating that VLMs can reliably solve the video shell-game task end-to-end without external tools. Our code and data are available at https://vetbench.github.io .
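What makes the shell game a clean diagnostic is that its ground truth is trivially computable: the answer depends only on the swap sequence, never on appearance. A toy oracle for that ground truth (our illustration, not the benchmark's generator); the sequence of intermediate positions it traces is exactly the kind of explicit state SGCoT asks the model to verbalize:

```python
def shell_game(ball_start, swaps):
    """Track which cup hides the ball after a sequence of swaps.

    ball_start: index of the cup initially hiding the ball
    swaps:      list of (a, b) pairs; each swap exchanges cups a and b
    """
    pos = ball_start
    for a, b in swaps:
        # The ball moves only if its current cup takes part in the swap
        if pos == a:
            pos = b
        elif pos == b:
            pos = a
    return pos
```

Because every intermediate `pos` must be carried forward, a fixed-depth model that cannot externalize this state (e.g. as a trajectory in its chain of thought) runs into the expressivity limit the paper formalizes.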
- daVinci-Env: Open SWE Environment Synthesis at Scale
Training capable software engineering (SWE) agents demands large-scale, executable, and verifiable environments that provide dynamic feedback loops for iterative code editing, test execution, and solution refinement. However, existing open-source datasets remain limited in scale and repository diversity, while industrial solutions are opaque with unreleased infrastructure, creating a prohibitive barrier for most academic research groups. We present OpenSWE, the largest fully transparent framework for SWE agent training in Python, comprising 45,320 executable Docker environments spanning over 12.8k repositories, with all Dockerfiles, evaluation scripts, and infrastructure fully open-sourced for reproducibility. OpenSWE is built through a multi-agent synthesis pipeline deployed across a 64-node distributed cluster, automating repository exploration, Dockerfile construction, evaluation script generation, and iterative test analysis. Beyond scale, we propose a quality-centric filtering pipeline that characterizes the inherent difficulty of each environment, filtering out instances that are either unsolvable or insufficiently challenging and retaining only those that maximize learning efficiency. With $891K spent on environment construction and an additional $576K on trajectory sampling and difficulty-aware curation, the entire project represents a total investment of approximately $1.47 million, yielding about 13,000 curated trajectories from roughly 9,000 quality-guaranteed environments. Extensive experiments validate OpenSWE's effectiveness: OpenSWE-32B and OpenSWE-72B achieve 62.4% and 66.0% on SWE-bench Verified, establishing SOTA among Qwen2.5-series models. Moreover, SWE-focused training yields substantial out-of-domain improvements, including up to 12 points on mathematical reasoning and 5 points on science benchmarks, without degrading factual recall.
- OmniForcing: Unleashing Real-time Joint Audio-Visual Generation
Recent joint audio-visual diffusion models achieve remarkable generation quality but suffer from high latency due to their bidirectional attention dependencies, hindering real-time applications. We propose OmniForcing, the first framework to distill an offline, dual-stream bidirectional diffusion model into a high-fidelity streaming autoregressive generator. However, naively applying causal distillation to such dual-stream architectures triggers severe training instability, due to the extreme temporal asymmetry between modalities and the resulting token sparsity. We address the inherent information density gap by introducing an Asymmetric Block-Causal Alignment with a zero-truncation Global Prefix that prevents multi-modal synchronization drift. The gradient explosion caused by extreme audio token sparsity during the causal shift is further resolved through an Audio Sink Token mechanism equipped with an Identity RoPE constraint. Finally, a Joint Self-Forcing Distillation paradigm enables the model to dynamically self-correct cumulative cross-modal errors from exposure bias during long rollouts. Empowered by a modality-independent rolling KV-cache inference scheme, OmniForcing achieves state-of-the-art streaming generation at ~25 FPS on a single GPU, maintaining multi-modal synchronization and visual quality on par with the bidirectional teacher. Project page: https://omniforcing.com
- Multimodal OCR: Parse Anything from Documents
We present Multimodal OCR (MOCR), a document parsing paradigm that jointly parses text and graphics into unified textual representations. Unlike conventional OCR systems that focus on text recognition and leave graphical regions as cropped pixels, our method, termed dots.mocr, treats visual elements such as charts, diagrams, tables, and icons as first-class parsing targets, enabling systems to parse documents while preserving semantic relationships across elements. It offers several advantages: (1) it reconstructs both text and graphics as structured outputs, enabling more faithful document reconstruction; (2) it supports end-to-end training over heterogeneous document elements, allowing models to exploit semantic relations between textual and visual components; and (3) it converts previously discarded graphics into reusable code-level supervision, unlocking multimodal supervision embedded in existing documents. To make this paradigm practical at scale, we build a comprehensive data engine from PDFs, rendered webpages, and native SVG assets, and train a compact 3B-parameter model through staged pretraining and supervised fine-tuning. We evaluate dots.mocr from two perspectives: document parsing and structured graphics parsing. On document parsing benchmarks, it ranks second only to Gemini 3 Pro on our OCR Arena Elo leaderboard, surpasses existing open-source document parsing systems, and sets a new state of the art of 83.9 on olmOCR Bench. On structured graphics parsing, dots.mocr achieves higher reconstruction quality than Gemini 3 Pro across image-to-SVG benchmarks, demonstrating strong performance on charts, UI layouts, scientific figures, and chemical diagrams. These results show a scalable path toward building large-scale image-to-code corpora for multimodal pretraining. Code and models are publicly available at https://github.com/rednote-hilab/dots.mocr.
- AI Can Learn Scientific Taste
Great scientists have strong judgement and foresight, closely tied to what we call scientific taste. Here, we use the term to refer to the capacity to judge and propose research ideas with high potential impact. However, most related research focuses on improving an AI scientist's executive capability, while enhancing an AI's scientific taste remains underexplored. In this work, we propose Reinforcement Learning from Community Feedback (RLCF), a training paradigm that uses large-scale community signals as supervision, and formulate scientific taste learning as a preference modeling and alignment problem. For preference modeling, we train Scientific Judge on 700K field- and time-matched pairs of high- vs. low-citation papers to judge ideas. For preference alignment, using Scientific Judge as a reward model, we train a policy model, Scientific Thinker, to propose research ideas with high potential impact. Experiments show Scientific Judge outperforms SOTA LLMs (e.g., GPT-5.2, Gemini 3 Pro) and generalizes to future-year tests, unseen fields, and peer-review preferences. Furthermore, Scientific Thinker proposes research ideas with higher potential impact than baselines. Our findings show that AI can learn scientific taste, marking a key step toward reaching human-level AI scientists.
- OpenSeeker: Democratizing Frontier Search Agents by Fully Open-Sourcing Training Data
Deep search capabilities have become an indispensable competency for frontier Large Language Model (LLM) agents, yet the development of high-performance search agents remains dominated by industrial giants due to a lack of transparent, high-quality training data. This persistent data scarcity has fundamentally hindered the progress of the broader research community in developing and innovating within this domain. To bridge this gap, we introduce OpenSeeker, the first fully open-source search agent (i.e., model and data) that achieves frontier-level performance through two core technical innovations: (1) Fact-grounded, scalable, controllable QA synthesis, which reverse-engineers the web graph via topological expansion and entity obfuscation to generate complex, multi-hop reasoning tasks with controllable coverage and complexity. (2) Denoised trajectory synthesis, which employs a retrospective summarization mechanism to denoise trajectories, thereby enabling the teacher LLMs to generate high-quality actions. Experimental results demonstrate that OpenSeeker, trained in a single run on only 11.7k synthesized samples, achieves state-of-the-art performance across multiple benchmarks including BrowseComp, BrowseComp-ZH, xbench-DeepSearch, and WideSearch. Notably, trained with simple SFT, OpenSeeker significantly outperforms the second-best fully open-source agent DeepDive (e.g., 29.5% vs. 15.3% on BrowseComp), and even surpasses industrial competitors such as Tongyi DeepResearch (trained via extensive continual pre-training, SFT, and RL) on BrowseComp-ZH (48.4% vs. 46.7%). We fully open-source the complete training dataset and the model weights to democratize frontier search agent research and foster a more transparent, collaborative ecosystem.
- EnterpriseOps-Gym: Environments and Evaluations for Stateful Agentic Planning and Tool Use in Enterprise Settings
Large language models are shifting from passive information providers to active agents intended for complex workflows. However, their deployment as reliable AI workers in enterprise is stalled by benchmarks that fail to capture the intricacies of professional environments, specifically, the need for long-horizon planning amidst persistent state changes and strict access protocols. In this work, we introduce EnterpriseOps-Gym, a benchmark designed to evaluate agentic planning in realistic enterprise settings. Specifically, EnterpriseOps-Gym features a containerized sandbox with 164 database tables and 512 functional tools to mimic real-world search friction. Within this environment, agents are evaluated on 1,150 expert-curated tasks across eight mission-critical verticals (including Customer Service, HR, and IT). Our evaluation of 14 frontier models reveals critical limitations in state-of-the-art models: the top-performing Claude Opus 4.5 achieves only 37.4% success. Further analysis shows that providing oracle human plans improves performance by 14-35 percentage points, pinpointing strategic reasoning as the primary bottleneck. Additionally, agents frequently fail to refuse infeasible tasks (best model achieves 53.9%), leading to unintended and potentially harmful side effects. Our findings underscore that current agents are not yet ready for autonomous enterprise deployment. More broadly, EnterpriseOps-Gym provides a concrete testbed to advance the robustness of agentic planning in professional workflows.
- Grounding World Simulation Models in a Real-World Metropolis
What if a world simulation model could render not an imagined environment but a city that actually exists? Prior generative world models synthesize visually plausible yet artificial environments by imagining all content. We present Seoul World Model (SWM), a city-scale world model grounded in the real city of Seoul. SWM anchors autoregressive video generation through retrieval-augmented conditioning on nearby street-view images. However, this design introduces several challenges, including temporal misalignment between retrieved references and the dynamic target scene, limited trajectory diversity and data sparsity from vehicle-mounted captures at sparse intervals. We address these challenges through cross-temporal pairing, a large-scale synthetic dataset enabling diverse camera trajectories, and a view interpolation pipeline that synthesizes coherent training videos from sparse street-view images. We further introduce a Virtual Lookahead Sink to stabilize long-horizon generation by continuously re-grounding each chunk to a retrieved image at a future location. We evaluate SWM against recent video world models across three cities: Seoul, Busan, and Ann Arbor. SWM outperforms existing methods in generating spatially faithful, temporally consistent, long-horizon videos grounded in actual urban environments over trajectories reaching hundreds of meters, while supporting diverse camera movements and text-prompted scenario variations.
- HSImul3R: Physics-in-the-Loop Reconstruction of Simulation-Ready Human-Scene Interactions
We present HSImul3R, a unified framework for simulation-ready 3D reconstruction of human-scene interactions (HSI) from casual captures, including sparse-view images and monocular videos. Existing methods suffer from a perception-simulation gap: visually plausible reconstructions often violate physical constraints, leading to instability in physics engines and failure in embodied AI applications. To bridge this gap, we introduce a physically-grounded bi-directional optimization pipeline that treats the physics simulator as an active supervisor to jointly refine human dynamics and scene geometry. In the forward direction, we employ Scene-targeted Reinforcement Learning to optimize human motion under dual supervision of motion fidelity and contact stability. In the reverse direction, we propose Direct Simulation Reward Optimization, which leverages simulation feedback on gravitational stability and interaction success to refine scene geometry. We further present HSIBench, a new benchmark with diverse objects and interaction scenarios. Extensive experiments demonstrate that HSImul3R produces the first stable, simulation-ready HSI reconstructions and can be directly deployed to real-world humanoid robots.
- Attention Residuals
Residual connections with PreNorm are standard in modern LLMs, yet they accumulate all layer outputs with fixed unit weights. This uniform aggregation causes uncontrolled hidden-state growth with depth, progressively diluting each layer's contribution. We propose Attention Residuals (AttnRes), which replaces this fixed accumulation with softmax attention over preceding layer outputs, allowing each layer to selectively aggregate earlier representations with learned, input-dependent weights. To address the memory and communication overhead of attending over all preceding layer outputs for large-scale model training, we introduce Block AttnRes, which partitions layers into blocks and attends over block-level representations, reducing the memory footprint while preserving most of the gains of full AttnRes. Combined with cache-based pipeline communication and a two-phase computation strategy, Block AttnRes becomes a practical drop-in replacement for standard residual connections with minimal overhead. Scaling law experiments confirm that the improvement is consistent across model sizes, and ablations validate the benefit of content-dependent depth-wise selection. We further integrate AttnRes into the Kimi Linear architecture (48B total / 3B activated parameters) and pre-train on 1.4T tokens, where AttnRes mitigates PreNorm dilution, yielding more uniform output magnitudes and gradient distribution across depth, and improves downstream performance across all evaluated tasks.
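The core idea replaces the unit-weight residual sum with an input-dependent convex combination over preceding layer outputs. A minimal NumPy sketch, assuming a scaled dot-product score with a learned query projection (a hypothetical parameterization for illustration; the paper's exact scoring function and the block-level variant are not reproduced here):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attn_residual(layer_outputs, query_proj):
    """Aggregate preceding layer outputs with softmax attention
    instead of summing them with fixed unit weights.

    layer_outputs: list of (d,) arrays, outputs of layers 0..l
    query_proj:    (d, d) projection producing the query from the
                   current layer's output (assumed parameterization)
    """
    H = np.stack(layer_outputs)            # (l+1, d)
    q = H[-1] @ query_proj                 # query from current layer
    scores = H @ q / np.sqrt(H.shape[1])   # scaled dot-product scores
    w = softmax(scores)                    # learned, input-dependent weights
    return w @ H                           # selective aggregation
```

Because the weights are a softmax rather than all-ones, the aggregated state no longer grows unboundedly with depth, which is the PreNorm dilution effect the paper targets; Block AttnRes would apply the same scoring over block-level summaries instead of every layer.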
Techmeme(42)
- Austin-based Ironlight, which is building a regulated marketplace for tokenized securities, raised a $21M Series A; its platform received FINRA approval in 2025 (Ryan Lawler/Axios)
Ironlight, which is building a regulated marketplace for tokenized securities, raised $21 million in Series A funding …
- Manus introduces My Computer, a desktop application that enables its AI agent to interact directly with the user's local files, tools, and applications (Manus)
Until today, Manus has lived entirely in the cloud. The cloud sandbox has served Manus well. Inside an isolated, secure environment …
- Roche says it has deployed 3,500+ Nvidia Blackwell GPUs, which it calls "the greatest announced GPU footprint available to a pharmaceutical company" (Sebastian Moss/DatacenterDynamics)
Building on a previous Genentech-Nvidia partnership, the fifth-largest pharmaceutical company in the world has deployed more than 3,500 Nvidia Blackwell GPUs.
- Nvidia says BYD, Geely, Isuzu, and Nissan will use its Drive Hyperion AV platform, and that Uber will launch Hyperion-powered robotaxis across 28 cities by 2028 (Andrew J. Hawkins/The Verge)
The chipmaker has been at the center of simmering trade tensions between the US and China.
- Nvidia unveils Space-1 Vera Rubin for orbital data centers, saying its GPU delivers up to 25x more AI compute for space-based inferencing compared to the H100 (Sebastian Moss/DatacenterDynamics)
Nvidia has developed a space-specific module of its Vera Rubin GPU-CPU platform, which will be used by Aetherflux, Axiom Space, Kepler Comms, Planet, Sophia Space & Starcloud.
- US startup Reflection AI is working with South Korea's Shinsegae Group to build a 250MW data center in South Korea, sources say in a several-billion-dollar deal (Amrith Ramkumar/Wall Street Journal)
The Trump administration is using AI chips and models as a tool for diplomacy and boosting U.S. allies.
- You.com appoints Saahil Jain as new CTO after co-founder Bryan McCann left to Anthropic; You.com, which began in consumer search, is targeting enterprise more (Kevin McLaughlin/The Information)
AI startup You.com, valued at $1.5 billion after a funding round in September, is changing its senior leadership as it focuses on helping businesses adopt AI.
- Mistral announces Mistral Forge to help enterprises build custom models actually trained on their own data, using Mistral open-weight models as a starting point (TechCrunch)
Most enterprise AI projects fail not because companies lack the technology, but because the models they're using don't understand their business.
- Meta says Quest users will lose access to Meta Horizon Worlds on the headsets on June 15; access will continue on the Meta Horizon mobile app (Riley Griffin/Bloomberg)
Meta Platforms Inc. said that users of its Quest headsets will lose access to Horizon Worlds, a virtual destination where cartoon versions of people …
- Jensen Huang says Nvidia is in the process of restarting manufacturing of its H200 chips for shipments to China and it has received orders from "many customers" (Ina Fried/Axios)
- Robinhood Ventures Fund I discloses its first investments, buying $14.6M of Stripe shares and $20M of ElevenLabs' preferred stock in March (CoinDesk)
The closed-end fund aims to give everyday investors exposure to private firms before they go public.
- Sources: China is penalizing people tied to Meta's $2B Manus acquisition, including by apparently restricting Manus executives from leaving China for Singapore (New York Times)
The country appears to be cracking down on people linked to the acquisition of Manus, a Singapore company with Chinese roots, as President Trump prepares to visit Beijing.
Solidot(36)
- GDC 2026 Attendance Down About 30%
Attendance at the 2026 Game Developers Conference (GDC) fell by about 30% from last year, to roughly 20,000. GDC 2022, the first edition after the pandemic, was held in a hybrid online/in-person format, with nearly 12,000 in-person visitors and 17,000 attendees in total. Attendance rebounded to 28,000 in 2023, topped 30,000 in 2024 to set a record, and held at that level in 2025. This year's decline is attributed to costs and to international visitors' concerns about conditions inside the US.
- FSF Wants User Freedom to Be a Goal of AI Copyright Lawsuits
Anthropic downloaded more than 7 million books from shadow libraries such as Library Genesis; it settled the resulting copyright suit with book authors and is contacting the affected authors to offer financial compensation. One of the books in Anthropic's dataset is Sam Williams's "Free as in Freedom: Richard Stallman's Crusade for Free Software", published by O'Reilly and the FSF under the GNU Free Documentation License (GNU FDL), a free license that permits use for any purpose without payment. The FSF says it has little interest in monetary compensation: if books whose copyright it holds are used by AI companies to train large models, the compensation it would rather receive is user freedom, meaning AI companies sharing with users the complete training inputs, the full model, the training configuration, and the corresponding software source code.
- Should British Graffiti Artist Banksy Still Remain Anonymous?
Reuters published an investigative report examining the identity of the anonymous British graffiti artist Banksy. In November 2022, Banksy painted murals on the walls of a bombed village near Kyiv, Ukraine, and local residents saw the artist at work. Reuters journalists followed up and found a guilty plea Banksy personally signed years ago, admitting a misdemeanor charge; the document revealed his real identity. Banksy's lawyers urged the journalists not to publish it, saying doing so would violate the artist's privacy, interfere with his work, endanger his safety, and harm the public interest. The lawyers said: "Creating anonymously or under a pseudonym serves an important social interest. It protects free expression, allowing creators to speak truth to power without fear of retaliation, censorship, or persecution, especially on sensitive matters of politics, religion, or social justice."
- New Record for Ambient-Pressure Superconductivity
According to a study published in PNAS, a team from the University of Houston's physics department and the Texas Center for Superconductivity achieved a superconducting transition temperature of 151 kelvin (about minus 122 degrees Celsius) at ambient pressure, a new world record for ambient-pressure superconductivity. Superconductors normally require ultrahigh pressure or ultralow temperature, and an ambient-pressure, room-temperature superconductor has long been a goal. The team used a new process called "pressure quenching": a preselected sample is first subjected to extremely high pressure, which alters its microstructure and markedly raises its superconducting transition temperature; after cooling to a specific state while the high pressure is maintained, the pressure is then released abruptly. This rapid "quench" locks in the metastable, superconductivity-favoring structure acquired under high pressure, so the material continues to superconduct at a much higher temperature after returning to ambient pressure. With this method the team raised the ambient-pressure transition temperature to 151 K.
- Over 40% of Japanese Plan to Work Past 70
According to a Nikkei survey, 42% of respondents said they would still be working at age 70, the first time the figure has exceeded 40% since the survey began in 2018. 23% said they would keep working until age 70-74, and 19% said 75 or older. The average intended working age was 68.3, above the statutory retirement age of 65. Japan's Act on Stabilization of Employment of Elderly Persons makes it a best-effort obligation for companies to secure work opportunities for older employees up to age 70.
- Polish Nuclear Research Institute Hit by Cyberattack
Poland's National Centre for Nuclear Research (NCBJ) disclosed that its IT infrastructure came under cyberattack, but said its security team acted quickly to thwart it, so the impact was minimal. NCBJ conducts research in nuclear physics, reactor technology, particle physics, and radiation applications, and operates the MARIA nuclear reactor, used for scientific experiments, neutron research, and medical isotope production. NCBJ said the MARIA reactor was unaffected and remains at full power. NCBJ has not identified the attackers.
- Nvidia's DLSS 5 Draws Controversy and Criticism
Nvidia demonstrated the next version of its Deep Learning Super Sampling (DLSS) upscaling technology, due later this year, and drew widespread controversy and criticism from gamers: in reconstructing the image, DLSS 5 dramatically alters game visuals, overlaying an AI filter that leaves in-game characters unrecognizable. Players flooded social media with mocking memes, and many related discussions on Reddit were deleted (possibly Nvidia doing damage control). Just as Microsoft has been dubbed "Microslop", gamers have begun calling Nvidia "Slopvidia".
- The Sun May Have Migrated Outward from the Galactic Center Billions of Years Ago
New evidence suggests that roughly 4 to 6 billion years ago the Sun may have taken part in a large-scale stellar migration event, in which a group of stars very similar to the Sun (solar twins) left the Milky Way's core region together and moved outward. Astronomers analyzed observations from ESA's Gaia satellite to build an unprecedentedly precise stellar catalog. The study suggests the Sun's current position in the galaxy is not accidental but may be part of this large-scale migration. The Sun formed about 4.6 billion years ago, at a position more than 10,000 light-years closer to the galactic center than it is today. Studies of stellar chemical composition support this inference, but the result has long puzzled scientists: observations show a large bar structure at the galactic center that creates a so-called corotation barrier, making it difficult for stars to migrate so far out from the central region. To resolve this, a research team from Japan carried out a large-scale study of Sun-like stars in the Milky Way, stars closely matching the Sun in temperature, surface gravity, and chemical composition. The team selected 6,594 solar twins from Gaia data to build a catalog, and from this large sample reconstructed the most precise stellar age distribution to date. The analysis reveals a clear, broad age peak at 4-6 billion years ago, indicating a group of similarly aged stars at similar distances from the galactic center. This means the Sun did not migrate alone but was part of a large-scale outward migration event.
- Microsoft Lets Windows 11 Users Rename Their Home Folder During Setup
Windows setup has famously never let users rename their home folder, instead generating the name automatically from the account name or email address. Last year Microsoft began testing a way to rename it, but the process was cumbersome. Now Microsoft has finally made renaming the home folder part of the setup flow: in Windows 11 Insider Preview Build 26220.8062, the "device name" page of setup includes an option to rename the home folder; if users skip this step, the folder keeps its default name. Folder names must follow Microsoft's naming rules.
- German Court Rules TCL's QLED TVs Are Not Real QLED
A German court ruled that TCL misled consumers: several of its TVs marketed as "quantum-dot (QLED)" sets are not genuine QLED and do not deliver the color reproduction a QLED TV should. The court ordered TCL to stop advertising or selling the affected models in Germany. The suit was brought by South Korea's Hansol Chemical, a partner of TCL rival Samsung. Tests commissioned by Hansol found no indium or cadmium, both essential quantum-dot materials, in three TVs sold as quantum-dot sets. TCL disputed the results, saying quantum-dot content varies by supplier, and published its own test results, which contradict Hansol's; the two sides used different methods: TCL's tests focused on the quantum-dot film it uses, while Hansol tested the TCL TVs themselves. Hansol has filed suits against TCL in several countries including the US, and another Chinese TV maker, Hisense, faces similar litigation.
- Marknote 1.5 Released
The Markdown-based note-taking app Marknote has released v1.5. Highlights include: a new Source Mode for editing Markdown directly instead of through the rich-text WYSIWYG interface; wiki-style links between notes, with cross-note search; simplified note and notebook management, with each notebook showing its note count and drag-and-drop for moving notes between notebooks; a Duplicate Note action that copies an existing note for use as a template; a KRunner plugin; and more.
- Kagi Translate Adds LinkedIn Speak as a Target Language
Users of the professional social network LinkedIn have developed a distinctive linguistic style known as LinkedIn Speak, characterized by repackaging any mundane event as an upbeat grand narrative. For example: losing your job becomes "starting a new chapter in life", and taking a janitorial job at a Fortune 500 company becomes being "honored to join". In China, Alibaba's corporate jargon ("empower", "close the loop", "build up assets", "ecosystem") is perhaps the closest analogue to LinkedIn Speak. Kagi's translation tool now supports LinkedIn Speak as an output, letting anyone generate corporate-speak from plain language.