OrangeBot.AI Digest — 2025-12-23
59 headlines across 8 sources, aggregated for this day.
Hacker News(15)
- What makes you senior (terriblesoftware.org)
- Lua 5.5 (lua.org)
- We replaced H.264 streaming with JPEG screenshots (and it worked better) (blog.helix.ml)
- Fabrice Bellard Releases MicroQuickJS (github.com)
- Meta is using the Linux scheduler designed for Valve's Steam Deck on its servers (www.phoronix.com)
- Test, don't just verify (alperenkeles.com)
- Ryanair fined €256M over ‘abusive strategy’ to limit ticket sales by OTAs (www.theguardian.com)
- Instant database clones with PostgreSQL 18 (boringsql.com)
- 10 years bootstrapped: €6.5M revenue with a team of 13 (www.datocms.com)
- Ask HN: What are the best engineering blogs with real-world depth?
- Adobe Photoshop 1.0 Source Code (1990) (computerhistory.org)
- iOS 26.3 brings AirPods-like pairing to third-party devices in EU under DMA (www.macrumors.com)
- Show HN: CineCLI – Browse and torrent movies directly from your terminal (github.com)
- Archivists posted the 60 Minutes CECOT segment Bari Weiss killed (www.404media.co)
- 60 Minutes: Cecot
GitHub Trending(14)
- rendercv / rendercv
Typst-based CV/resume generator for academics and engineers
- exo-explore / exo
Run your own AI cluster at home with everyday devices 📱💻 🖥️⌚
- google / langextract
A Python library for extracting structured information from unstructured text using LLMs with precise source grounding and interactive visualization.
- yichuan-w / LEANN
RAG on Everything with LEANN. Enjoy 97% storage savings while running a fast, accurate, and 100% private RAG application on your personal device.
- safety-research / bloom
bloom - evaluate any behavior immediately 🌸🌱
- stan-smith / FossFLOW
Make beautiful isometric infrastructure diagrams
- vendure-ecommerce / vendure
The most customizable commerce platform built with TypeScript, NestJS and GraphQL.
- cloudcommunity / Free-Certifications
A curated list of free courses with certifications. Also available at https://free-certifications.com/
- open-webui / open-webui
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
- davila7 / claude-code-templates
CLI tool for configuring and monitoring Claude Code
- makeplane / plane
🔥 🔥 🔥 Open Source JIRA, Linear, Monday, and Asana Alternative. Plane helps you track your issues, epics, and cycles the easiest way on the planet.
- xerrors / Yuxi-Know
An agent platform that integrates a LightRAG knowledge base with knowledge graphs. Built with LangChain v1, Vue, and FastAPI; supports DeepAgents, MinerU PDF, Neo4j, and MCP.
- swisskyrepo / PayloadsAllTheThings
A list of useful payloads and bypasses for Web Application Security and Pentest/CTF
- vllm-project / vllm-omni
A framework for efficient model inference with omni-modality models
Hugging Face(15)
- DataFlow: An LLM-Driven Framework for Unified Data Preparation and Workflow Automation in the Era of Data-Centric AI
The rapidly growing demand for high-quality data in Large Language Models (LLMs) has intensified the need for scalable, reliable, and semantically rich data preparation pipelines. However, current practices remain dominated by ad-hoc scripts and loosely specified workflows, which lack principled abstractions, hinder reproducibility, and offer limited support for model-in-the-loop data generation. To address these challenges, we present DataFlow, a unified and extensible LLM-driven data preparation framework. DataFlow is designed with system-level abstractions that enable modular, reusable, and composable data transformations, and provides a PyTorch-style pipeline construction API for building debuggable and optimizable dataflows. The framework consists of nearly 200 reusable operators and six domain-general pipelines spanning text, mathematical reasoning, code, Text-to-SQL, agentic RAG, and large-scale knowledge extraction. To further improve usability, we introduce DataFlow-Agent, which automatically translates natural-language specifications into executable pipelines via operator synthesis, pipeline planning, and iterative verification. Across six representative use cases, DataFlow consistently improves downstream LLM performance. Our math, code, and text pipelines outperform curated human datasets and specialized synthetic baselines, achieving up to +3% execution accuracy in Text-to-SQL over SynSQL, +7% average improvements on code benchmarks, and 1–3 point gains on MATH, GSM8K, and AIME. Moreover, a unified 10K-sample dataset produced by DataFlow enables base models to surpass counterparts trained on 1M Infinity-Instruct data. These results demonstrate that DataFlow provides a practical and high-performance substrate for reliable, reproducible, and scalable LLM data preparation, and establishes a system-level foundation for future data-centric AI development.
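The abstract describes "modular, reusable, and composable" operators with a PyTorch-style pipeline construction API. A minimal sketch of what such an interface could look like (the class names `Operator`, `Deduplicate`, `MinLength`, and `Pipeline` are invented for illustration, not DataFlow's real API):

```python
# Hypothetical sketch of a composable data-preparation operator pipeline,
# loosely in the spirit of the PyTorch-style API the abstract describes.
# All names here are assumptions, not DataFlow's actual interface.

class Operator:
    def __call__(self, records):
        raise NotImplementedError

class Deduplicate(Operator):
    def __call__(self, records):
        # Drop exact duplicates while preserving order.
        seen, out = set(), []
        for r in records:
            if r not in seen:
                seen.add(r)
                out.append(r)
        return out

class MinLength(Operator):
    def __init__(self, n):
        self.n = n
    def __call__(self, records):
        # Keep only records of at least n characters.
        return [r for r in records if len(r) >= self.n]

class Pipeline(Operator):
    """Compose operators sequentially, nn.Sequential-style."""
    def __init__(self, *ops):
        self.ops = ops
    def __call__(self, records):
        for op in self.ops:
            records = op(records)
        return records

pipe = Pipeline(Deduplicate(), MinLength(5))
print(pipe(["hello world", "hi", "hello world"]))  # prints ['hello world']
```

Each operator is a pure transformation over a batch of records, so pipelines remain debuggable step by step, which is the property the abstract emphasizes.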
- The Prism Hypothesis: Harmonizing Semantic and Pixel Representations via Unified Autoencoding
Deep representations across modalities are inherently intertwined. In this paper, we systematically analyze the spectral characteristics of various semantic and pixel encoders. Interestingly, our study uncovers a highly inspiring and rarely explored correspondence between an encoder's feature spectrum and its functional role: semantic encoders primarily capture low-frequency components that encode abstract meaning, whereas pixel encoders additionally retain high-frequency information that conveys fine-grained detail. This heuristic finding offers a unifying perspective that ties encoder behavior to its underlying spectral structure. We define it as the Prism Hypothesis, where each data modality can be viewed as a projection of the natural world onto a shared feature spectrum, much as a prism disperses light. Building on this insight, we propose Unified Autoencoding (UAE), a model that harmonizes semantic structure and pixel details via an innovative frequency-band modulator, enabling their seamless coexistence. Extensive experiments on ImageNet and MS-COCO benchmarks validate that our UAE effectively unifies semantic abstraction and pixel-level fidelity into a single latent space with state-of-the-art performance.
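The kind of spectral analysis the abstract describes can be illustrated with a toy 2D FFT comparison; this is a hedged sketch of the general idea (broadband "pixel-like" maps carry more high-frequency energy than smooth "semantic-like" maps), not the paper's actual methodology:

```python
# Toy illustration of spectral-energy analysis in the spirit of the Prism
# Hypothesis: detail-heavy feature maps concentrate more energy at high
# spatial frequencies than smooth, semantics-like maps. Not the paper's code.
import numpy as np

def high_freq_ratio(feat, cutoff=0.25):
    """Fraction of spectral energy beyond `cutoff` of the Nyquist radius."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(feat))) ** 2
    h, w = feat.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return spec[r > cutoff].sum() / spec.sum()

rng = np.random.default_rng(0)
pixel_like = rng.normal(size=(64, 64))   # broadband, detail-heavy map
semantic_like = np.zeros((64, 64))
semantic_like[:32, :] = 1.0              # smooth step: low-frequency dominated

print(high_freq_ratio(pixel_like) > high_freq_ratio(semantic_like))  # prints True
```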
- Region-Constraint In-Context Generation for Instructional Video Editing
The in-context generation paradigm has recently demonstrated strong power in instructional image editing, offering both data efficiency and synthesis quality. Nevertheless, shaping such in-context learning for instruction-based video editing is not trivial. Without specifying editing regions, the results can suffer from inaccurate editing regions and token interference between editing and non-editing areas during denoising. To address these, we present ReCo, a new instructional video editing paradigm that novelly delves into constraint modeling between editing and non-editing regions during in-context generation. Technically, ReCo concatenates the source and target video width-wise for joint denoising. To calibrate video diffusion learning, ReCo capitalizes on two regularization terms, i.e., latent and attention regularization, applied to one-step backward denoised latents and attention maps, respectively. The former increases the latent discrepancy of the editing region between source and target videos while reducing that of non-editing areas, emphasizing modification of the editing area and suppressing unexpected content generation outside it. The latter suppresses the attention of tokens in the editing region to the corresponding tokens of the source video, thereby mitigating their interference during novel object generation in the target video. Furthermore, we propose a large-scale, high-quality video editing dataset, i.e., ReCo-Data, comprising 500K instruction-video pairs to benefit model training. Extensive experiments conducted on four major instruction-based video editing tasks demonstrate the superiority of our proposal.
- Infinite-Homography as Robust Conditioning for Camera-Controlled Video Generation
Recent progress in video diffusion models has spurred growing interest in camera-controlled novel-view video generation for dynamic scenes, aiming to provide creators with cinematic camera control capabilities in post-production. A key challenge in camera-controlled video generation is ensuring fidelity to the specified camera pose, while maintaining view consistency and reasoning about occluded geometry from limited observations. To address this, existing methods either train trajectory-conditioned video generation model on trajectory-video pair dataset, or estimate depth from the input video to reproject it along a target trajectory and generate the unprojected regions. Nevertheless, existing methods struggle to generate camera-pose-faithful, high-quality videos for two main reasons: (1) reprojection-based approaches are highly susceptible to errors caused by inaccurate depth estimation; and (2) the limited diversity of camera trajectories in existing datasets restricts learned models. To address these limitations, we present InfCam, a depth-free, camera-controlled video-to-video generation framework with high pose fidelity. The framework integrates two key components: (1) infinite homography warping, which encodes 3D camera rotations directly within the 2D latent space of a video diffusion model. Conditioning on this noise-free rotational information, the residual parallax term is predicted through end-to-end training to achieve high camera-pose fidelity; and (2) a data augmentation pipeline that transforms existing synthetic multiview datasets into sequences with diverse trajectories and focal lengths. Experimental results demonstrate that InfCam outperforms baseline methods in camera-pose accuracy and visual fidelity, generalizing well from synthetic to real-world data. Link to our project page: https://emjay73.github.io/InfCam/
- QuCo-RAG: Quantifying Uncertainty from the Pre-training Corpus for Dynamic Retrieval-Augmented Generation
Dynamic Retrieval-Augmented Generation adaptively determines when to retrieve during generation to mitigate hallucinations in large language models (LLMs). However, existing methods rely on model-internal signals (e.g., logits, entropy), which are fundamentally unreliable because LLMs are typically ill-calibrated and often exhibit high confidence in erroneous outputs. We propose QuCo-RAG, which shifts from subjective confidence to objective statistics computed from pre-training data. Our method quantifies uncertainty through two stages: (1) before generation, we identify low-frequency entities indicating long-tail knowledge gaps; (2) during generation, we verify entity co-occurrence in the pre-training corpus, where zero co-occurrence often signals hallucination risk. Both stages leverage Infini-gram for millisecond-latency queries over 4 trillion tokens, triggering retrieval when uncertainty is high. Experiments on multi-hop QA benchmarks show QuCo-RAG achieves EM gains of 5–12 points over state-of-the-art baselines with OLMo-2 models, and transfers effectively to models with undisclosed pre-training data (Llama, Qwen, GPT), improving EM by up to 14 points. Domain generalization on biomedical QA further validates the robustness of our paradigm. These results establish corpus-grounded verification as a principled, practically model-agnostic paradigm for dynamic RAG. Our code is publicly available at https://github.com/ZhishanQ/QuCo-RAG.
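The two-stage trigger the abstract describes (low-frequency entities before generation, zero co-occurrence during generation) can be sketched as follows. This is a hedged toy version: `corpus_count` stands in for an Infini-gram-style count query, and the thresholds and dictionary are invented for illustration:

```python
# Hypothetical sketch of a corpus-statistics retrieval trigger in the
# spirit of QuCo-RAG. TOY_COUNTS stands in for real Infini-gram lookups
# over the pre-training corpus; all values here are made up.

TOY_COUNTS = {
    ("marie curie",): 120,
    ("polonium",): 15,
    ("marie curie", "polonium"): 4,
    ("obscure entity",): 0,
}

def corpus_count(*entities):
    """How often the given entities (co-)occur in the pre-training corpus."""
    return TOY_COUNTS.get(tuple(sorted(entities)), 0)

def should_retrieve(entities, freq_threshold=10):
    # Stage 1: any low-frequency entity signals a long-tail knowledge gap.
    if any(corpus_count(e) < freq_threshold for e in entities):
        return True
    # Stage 2: zero co-occurrence between an entity pair signals
    # hallucination risk, so retrieval is triggered as well.
    for i in range(len(entities)):
        for j in range(i + 1, len(entities)):
            if corpus_count(entities[i], entities[j]) == 0:
                return True
    return False

print(should_retrieve(["obscure entity"]))           # prints True
print(should_retrieve(["marie curie", "polonium"]))  # prints False
```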
- Can LLMs Estimate Student Struggles? Human-AI Difficulty Alignment with Proficiency Simulation for Item Difficulty Prediction
Accurate estimation of item (question or task) difficulty is critical for educational assessment but suffers from the cold start problem. While Large Language Models demonstrate superhuman problem-solving capabilities, it remains an open question whether they can perceive the cognitive struggles of human learners. In this work, we present a large-scale empirical analysis of Human-AI Difficulty Alignment for over 20 models across diverse domains such as medical knowledge and mathematical reasoning. Our findings reveal a systematic misalignment where scaling up model size is not reliably helpful; instead of aligning with humans, models converge toward a shared machine consensus. We observe that high performance often impedes accurate difficulty estimation, as models struggle to simulate the capability limitations of students even when being explicitly prompted to adopt specific proficiency levels. Furthermore, we identify a critical lack of introspection, as models fail to predict their own limitations. These results suggest that general problem-solving capability does not imply an understanding of human cognitive struggles, highlighting the challenge of using current models for automated difficulty prediction.
- WorldWarp: Propagating 3D Geometry with Asynchronous Video Diffusion
Generating long-range, geometrically consistent video presents a fundamental dilemma: while consistency demands strict adherence to 3D geometry in pixel space, state-of-the-art generative models operate most effectively in a camera-conditioned latent space. This disconnect causes current methods to struggle with occluded areas and complex camera trajectories. To bridge this gap, we propose WorldWarp, a framework that couples a 3D structural anchor with a 2D generative refiner. To establish geometric grounding, WorldWarp maintains an online 3D geometric cache built via Gaussian Splatting (3DGS). By explicitly warping historical content into novel views, this cache acts as a structural scaffold, ensuring each new frame respects prior geometry. However, static warping inevitably leaves holes and artifacts due to occlusions. We address this using a Spatio-Temporal Diffusion (ST-Diff) model designed for a "fill-and-revise" objective. Our key innovation is a spatio-temporal varying noise schedule: blank regions receive full noise to trigger generation, while warped regions receive partial noise to enable refinement. By dynamically updating the 3D cache at every step, WorldWarp maintains consistency across video chunks. Consequently, it achieves state-of-the-art fidelity by ensuring that 3D logic guides structure while diffusion logic perfects texture. Project page: https://hyokong.github.io/worldwarp-page/
- LoGoPlanner: Localization Grounded Navigation Policy with Metric-aware Visual Geometry
Trajectory planning in unstructured environments is a fundamental and challenging capability for mobile robots. Traditional modular pipelines suffer from latency and cascading errors across perception, localization, mapping, and planning modules. Recent end-to-end learning methods map raw visual observations directly to control signals or trajectories, promising greater performance and efficiency in open-world settings. However, most prior end-to-end approaches still rely on separate localization modules that depend on accurate sensor extrinsic calibration for self-state estimation, thereby limiting generalization across embodiments and environments. We introduce LoGoPlanner, a localization-grounded, end-to-end navigation framework that addresses these limitations by: (1) finetuning a long-horizon visual-geometry backbone to ground predictions with absolute metric scale, thereby providing implicit state estimation for accurate localization; (2) reconstructing surrounding scene geometry from historical observations to supply dense, fine-grained environmental awareness for reliable obstacle avoidance; and (3) conditioning the policy on implicit geometry bootstrapped by the aforementioned auxiliary tasks, thereby reducing error propagation. We evaluate LoGoPlanner in both simulation and real-world settings, where its fully end-to-end design reduces cumulative error while metric-aware geometry memory enhances planning consistency and obstacle avoidance, leading to more than a 27.3% improvement over oracle-localization baselines and strong generalization across embodiments and environments. The code and models have been made publicly available on the project page: https://steinate.github.io/logoplanner.github.io/
- UCoder: Unsupervised Code Generation by Internal Probing of Large Language Models
Large language models (LLMs) have demonstrated remarkable capabilities in code generation tasks. However, their effectiveness heavily relies on supervised training with extensive labeled datasets (e.g., question-answering pairs) or unlabeled datasets (e.g., code snippets), which are often expensive and difficult to obtain at scale. To address this limitation, this paper introduces IPC, an unsupervised framework that leverages Internal Probing of LLMs for Code generation without any external corpus, not even unlabeled code snippets. We introduce problem-space probing, test-understanding probing, solution-space probing, and knowledge consolidation and reinforcement to surface the internal knowledge and confidence patterns already present in LLMs. Further, IPC identifies reliable code candidates through self-consistency mechanisms and representation-based quality estimation to train UCoder (a coder trained with unsupervised learning). We validate the proposed approach across multiple code benchmarks, demonstrating that unsupervised methods can achieve competitive performance compared to supervised approaches while significantly reducing the dependency on labeled data and computational resources. Analytic experiments reveal that internal model states contain rich signals about code quality and correctness, and that properly harnessing these signals enables effective unsupervised learning for code generation tasks, opening new directions for training code LLMs in resource-constrained scenarios.
- GenEnv: Difficulty-Aligned Co-Evolution Between LLM Agents and Environment Simulators
Training capable Large Language Model (LLM) agents is critically bottlenecked by the high cost and static nature of real-world interaction data. We address this by introducing GenEnv, a framework that establishes a difficulty-aligned co-evolutionary game between an agent and a scalable, generative environment simulator. Unlike traditional methods that evolve models on static datasets, GenEnv instantiates a data-evolving loop: the simulator acts as a dynamic curriculum policy, continuously generating tasks specifically tailored to the agent's "zone of proximal development". This process is guided by a simple but effective α-Curriculum Reward, which aligns task difficulty with the agent's current capabilities. We evaluate GenEnv on five benchmarks, including API-Bank, ALFWorld, BFCL, Bamboogle, and TravelPlanner. Across these tasks, GenEnv improves agent performance by up to +40.3% over 7B baselines and matches or exceeds the average performance of larger models. Compared to Gemini 2.5 Pro-based offline data augmentation, GenEnv achieves better performance while using 3.3× less data. By shifting from static supervision to adaptive simulation, GenEnv provides a data-efficient pathway for scaling agent capabilities.
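The abstract does not give the α-Curriculum Reward's formula, but the stated intent (reward the simulator for tasks matched to the agent's current capability) can be sketched with a simple shaping function. This is an assumption for illustration, not the paper's actual reward:

```python
# Hypothetical difficulty-aligned curriculum reward, NOT the paper's formula.
# The simulator is rewarded for proposing tasks whose observed agent success
# rate sits near a target level alpha, i.e. inside the agent's "zone of
# proximal development": neither trivially easy nor hopelessly hard.

def curriculum_reward(success_rate, alpha=0.5):
    """Peaks at success_rate == alpha, decaying linearly toward 0 at the extremes."""
    return 1.0 - abs(success_rate - alpha) / max(alpha, 1.0 - alpha)

print(curriculum_reward(0.5))  # prints 1.0: task sits at the target difficulty
print(curriculum_reward(1.0))  # prints 0.0: task is trivially easy for the agent
```

Under such a reward, as the agent improves and its success rate on existing tasks drifts above alpha, the simulator is pushed to generate harder tasks, producing the co-evolutionary curriculum the abstract describes.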
- LoPA: Scaling dLLM Inference via Lookahead Parallel Decoding
Diffusion Large Language Models (dLLMs) have demonstrated significant potential for high-speed inference. However, current confidence-driven decoding strategies are constrained by limited parallelism, typically achieving only 1–3 tokens per forward pass (TPF). In this work, we identify that the degree of parallelism during dLLM inference is highly sensitive to the Token Filling Order (TFO). We then introduce Lookahead PArallel decoding (LoPA), a training-free, plug-and-play algorithm that identifies a superior TFO and hence accelerates inference. LoPA concurrently explores distinct candidate TFOs via parallel branches, and selects the one with the highest potential for future parallelism based on branch confidence. We apply LoPA to the state-of-the-art D2F model and observe a substantial enhancement in decoding efficiency. Notably, LoPA increases the TPF of D2F-Dream to 10.1 on GSM8K while maintaining performance superior to the Dream baseline. Furthermore, to facilitate this unprecedented degree of parallelism, we develop a specialized multi-device inference system featuring Branch Parallelism (BP), which achieves a single-sample throughput of 1073.9 tokens per second under multi-GPU deployment. The code is available at https://github.com/zhijie-group/LoPA.
- Reasoning Palette: Modulating Reasoning via Latent Contextualization for Controllable Exploration for (V)LMs
Exploration capacity shapes both inference-time performance and reinforcement learning (RL) training for large (vision-) language models, as stochastic sampling often yields redundant reasoning paths with little high-level diversity. This paper proposes Reasoning Palette, a novel latent-modulation framework that endows the model with a stochastic latent variable for strategic contextualization, guiding its internal planning prior to token generation. This latent context is inferred from the mean-pooled embedding of a question-answer pair via a variational autoencoder (VAE), where each sampled latent potentially encodes a distinct reasoning context. During inference, a sampled latent is decoded into learnable token prefixes and prepended to the input prompt, modulating the model's internal reasoning trajectory. In this way, the model performs internal sampling over reasoning strategies prior to output generation, which shapes the style and structure of the entire response sequence. A brief supervised fine-tuning (SFT) warm-up phase allows the model to adapt to this latent conditioning. Within RL optimization, Reasoning Palette facilitates structured exploration by enabling on-demand injection of diverse reasoning modes, significantly enhancing exploration efficiency and sustained learning capability. Experiments across multiple reasoning benchmarks demonstrate that our method enables interpretable and controllable modulation of the (vision-) language model's strategic behavior, thereby achieving consistent performance gains over standard RL methods.
- StoryMem: Multi-shot Long Video Storytelling with Memory
Visual storytelling requires generating multi-shot videos with cinematic quality and long-range consistency. Inspired by human memory, we propose StoryMem, a paradigm that reformulates long-form video storytelling as iterative shot synthesis conditioned on explicit visual memory, transforming pre-trained single-shot video diffusion models into multi-shot storytellers. This is achieved by a novel Memory-to-Video (M2V) design, which maintains a compact and dynamically updated memory bank of keyframes from historical generated shots. The stored memory is then injected into single-shot video diffusion models via latent concatenation and negative RoPE shifts with only LoRA fine-tuning. A semantic keyframe selection strategy, together with aesthetic preference filtering, further ensures informative and stable memory throughout generation. Moreover, the proposed framework naturally accommodates smooth shot transitions and customized story generation applications. To facilitate evaluation, we introduce ST-Bench, a diverse benchmark for multi-shot video storytelling. Extensive experiments demonstrate that StoryMem achieves superior cross-shot consistency over previous methods while preserving high aesthetic quality and prompt adherence, marking a significant step toward coherent minute-long video storytelling.
- MobileWorld: Benchmarking Autonomous Mobile Agents in Agent-User Interactive, and MCP-Augmented Environments
Among existing online mobile-use benchmarks, AndroidWorld has emerged as the dominant benchmark due to its reproducible environment and deterministic evaluation; however, recent agents achieving over 90% success rates indicate its saturation and motivate the need for a more challenging benchmark. In addition, its environment lacks key application categories, such as e-commerce and enterprise communication, and does not reflect realistic mobile-use scenarios characterized by vague user instructions and hybrid tool usage. To bridge this gap, we introduce MobileWorld, a substantially more challenging benchmark designed to better reflect real-world mobile usage, comprising 201 tasks across 20 applications, while maintaining the same level of reproducible evaluation as AndroidWorld. The difficulty of MobileWorld is twofold. First, it emphasizes long-horizon tasks with cross-application interactions: MobileWorld requires nearly twice as many task-completion steps on average (27.8 vs. 14.3) and includes far more multi-application tasks (62.2% vs. 9.5%) compared to AndroidWorld. Second, MobileWorld extends beyond standard GUI manipulation by introducing novel task categories, including agent-user interaction and MCP-augmented tasks. To ensure robust evaluation, we provide a snapshot-based container environment and precise functional verifications, including backend database inspection and task callback APIs. We further develop a planner-executor agentic framework with extended action spaces to support user interactions and MCP calls. Our results reveal a sharp performance drop compared to AndroidWorld, with the best agentic framework and end-to-end model achieving 51.7% and 20.9% success rates, respectively. Our analysis shows that current models struggle significantly with user interaction and MCP calls, offering a strategic roadmap toward more robust, next-generation mobile intelligence.
- Does It Tie Out? Towards Autonomous Legal Agents in Venture Capital
Before closing venture capital financing rounds, lawyers conduct diligence that includes tying out the capitalization table: verifying that every security (for example, shares, options, warrants) and issuance term (for example, vesting schedules, acceleration triggers, transfer restrictions) is supported by large sets of underlying legal documentation. While LLMs continue to improve on legal benchmarks, specialized legal workflows, such as capitalization tie-out, remain out of reach even for strong agentic systems. The task requires multi-document reasoning, strict evidence traceability, and deterministic outputs that current approaches fail to reliably deliver. We characterize capitalization tie-out as an instance of a real-world benchmark for legal AI, analyze and compare the performance of existing agentic systems, and propose a world model architecture toward tie-out automation, and more broadly as a foundation for applied legal intelligence.
Solidot(15)
- The skin and internal organs sense cold in different ways
Different parts of the human body perceive cold differently. The skin senses low temperatures mainly through the TRPM8 ion channel, which is specialized for detecting cold environmental conditions, while internal organs such as the lungs and stomach rely chiefly on the molecular sensor TRPA1 to detect temperature changes. This explains why cold feels so different on the body's surface versus inside it: you may have noticed that a cold wind on the skin feels nothing like inhaling icy air or swallowing a cold drink. Each tissue type senses temperature change by activating its own biological pathway. The findings suggest that temperature perception is closely tied to the specific physiological functions of different body parts, and that visceral organs use a different molecular mechanism for sensing cold than the skin does.
- Spotify says anti-copyright extremists scraped its music library
Anna's Archive, a shadow library previously focused on archiving text, has released a 300 TB archive of the music streaming service Spotify, containing 256 million songs and 186 million ISRC codes. Anna's Archive calls it the world's largest public music metadata database. The archive contains 86 million audio files, accounting for roughly 99.6% of total plays, with a cutoff date of July 2025. The data show that over 70% of songs are almost never listened to; the three most popular tracks have more combined plays than the songs ranked from 20 million to 100 million put together. Spotify described those who published the archive as anti-copyright extremists and said it is investigating. The archivists likely used public Web APIs to scrape metadata and circumvented DRM to access the audio files. Spotify insists this was not a hack and that user data was unaffected.
- Samsung to integrate Google Gemini AI into its refrigerators
Whether you want it or not, AI is coming into your life, kitchen included. Samsung plans to integrate Google's Gemini AI into its refrigerators to learn customers' eating habits. At CES 2026 next month, Samsung plans to show a new Bespoke AI Refrigerator with built-in cameras that, with Gemini's help, automatically recognizes food, including leftovers in unlabeled containers. The AI fridge will keep its food inventory up to date without any manual input, tracking items as they are added and removed and offering suggestions based on what remains. This will be the first integration of Google Gemini into a refrigerator, marking the spread of generative AI from phones and laptops to smart home appliances.
- Microsoft plans to replace all C and C++ code with Rust by 2030
Microsoft plans to replace all of its C and C++ code with Rust by 2030, using AI-assisted tooling to carry out the massive rewrite. Microsoft Distinguished Engineer Galen Hunt said on LinkedIn that the company will combine AI and algorithmic techniques to rewrite Microsoft's largest codebases in Rust, with the goal of a single engineer converting a million lines of code per month. Hunt said he is hiring a software engineer with at least three years of systems-level development experience to help with the effort, ideally someone with experience implementing compilers, databases, or operating systems.
- Notepad gains table support
Microsoft keeps adding features to Notepad, the editor once known for its simplicity. A new build of Notepad just released to Windows 11 Canary and Dev channel testers adds lightweight table formatting, letting users insert and organize tables inside Notepad documents. Tables can be added via the formatting toolbar or directly through Markdown syntax, and rows and columns can be quickly added or removed via the right-click context menu or the table menu. Other updates include improvements to Notepad's AI features, though using them requires a Microsoft account.
- EA executive and Call of Duty co-creator Vince Zampella dies in car crash
Vince Zampella, a legendary developer with enormous influence on the FPS genre, has died in a crash while driving his Ferrari. He was 55. The accident occurred on Sunday at 12:45 p.m. near a tunnel entrance in the San Gabriel Mountains outside Los Angeles: his 2026 Ferrari 296 GTS exited the tunnel at high speed, lost control, and struck a concrete barrier. The car largely disintegrated and the driver died at the scene; a passenger was thrown from the vehicle and later died in hospital. Zampella and Jason West first developed the FPS Medal of Honor: Allied Assault for EA. When EA decided to bring the Medal of Honor series in-house, the pair founded Infinity Ward and signed with Activision to develop a new game code-named "MOH Killer", which became Call of Duty, a series that went on to reshape the entire games industry. Call of Duty has been one of the best-selling franchises of the past two decades; Infinity Ward made four Call of Duty titles for Activision: Call of Duty 1 and 2 and Modern Warfare 1 and 2. After Modern Warfare 2, Zampella and West were fired over a revenue-sharing dispute with Activision. The two then founded Respawn Entertainment and partnered with EA again. Respawn has shipped a string of acclaimed games, including Titanfall, Titanfall 2, Apex Legends, Star Wars Jedi: Fallen Order, and Star Wars Jedi: Survivor. Zampella was later promoted to group general manager in charge of the Battlefield series.
- Webb telescope finds a runaway supermassive black hole
The Webb telescope has discovered a runaway supermassive black hole traveling at high speed. With a mass of about 10 million Suns and a velocity of roughly 1,000 km/s, it is the first supermassive black hole confirmed to be racing away like a bolting horse, and one of the fastest objects ever detected. A galaxy-scale bow shock has formed ahead of it, and it trails a tail 200,000 light-years long in which accumulating gas is triggering star formation. Astronomers say it is the first runaway black hole found to have been ejected from its original galaxy; the force required to eject it would have been enormous. The paper is posted on arXiv.
- Rising CO2 makes plant-based food more caloric but less nutrient-dense
Carbon dioxide is food for plants, and rising atmospheric CO2 concentrations are thought to promote plant growth. A comparative study at Leiden University in the Netherlands found that higher CO2 levels make plant-based foods higher in calories but lower in nutrient density, and possibly more toxic. The researchers found that while plant yields increased, zinc content fell while lead content rose, a result that surprised them.
- Indoor tanning accelerates skin aging
According to a study published in Science Advances, indoor tanning with artificial ultraviolet light accelerates mutations in skin cells and may raise the risk of future cancer. The researchers found that tanners in their 30s and 40s carried more mutations in their skin cells than members of the general population in their 70s and 80s; genetically speaking, tanners' skin appears decades older. According to the American Cancer Society, skin cancer is the most common cancer in the United States, and its deadliest form is melanoma. About 11,000 Americans die of melanoma each year, chiefly due to ultraviolet exposure. UV radiation occurs naturally in sunlight and is also produced by artificial sources such as tanning beds. Melanoma incidence has risen alongside tanning-bed use, disproportionately affecting young women, who are tanning salons' main customers. Many countries have banned tanning beds, and the World Health Organization classifies them as a Group 1 carcinogen, in the same category as tobacco smoke and asbestos.
- Webb finds an exoplanet with a helium-carbon atmosphere
Astronomers using the Webb telescope have found an unusual exoplanet, PSR J2322-2650b, whose atmosphere consists mainly of helium and carbon. Deep in the atmosphere, the carbon clouds may condense into diamond. The planet's host is a pulsar, which emits gamma rays and other high-energy radiation but is invisible at the infrared wavelengths Webb observes, allowing astronomers to study the planet in detail. PSR J2322-2650b orbits extremely close to the pulsar, only about a million miles away, completing an orbit in just 7.8 hours. The pulsar's intense gravity stretches the Jupiter-mass planet into a lemon shape, and its surface temperature ranges from 600 to 2,000 degrees Celsius.
- Natural light may help diabetics control blood sugar
Human cells and tissues follow circadian rhythms, 24-hour cycles of metabolic activity that regulate physiological processes such as blood glucose levels. Previous research has shown that exposure to artificial light at night disrupts these rhythms and raises blood sugar, while spending more time outdoors in sunlight appears to strengthen the body's response to hormones that help control blood glucose. Researchers recruited 13 patients with type 2 diabetes, average age 70, and had them stay in a room for 4.5 days. The participants continued taking their usual diabetes medication and were exposed to natural light only through large windows from 8:00 to 17:00 each day; a control condition used only artificial light. During the natural-light period, participants' blood glucose stayed within the healthy range 50% of the time, versus 43% under artificial light. The researchers say further studies are needed to confirm the finding.
- The age of the subscription trap
A $169 alarm clock offers special lighting effects and sounds, but customers must pay $4.99 a month for them. Welcome to the age of the subscription trap, where more and more of the things you pay for turn around and hold you captive. Subscriptions are very good for companies because they provide a steady revenue stream; for consumers they are mostly a bad deal, for the same reason: you have to keep paying. A fee of $5 a month or more can follow us to the grave. Research shows consumers spent an average of $219 a month on subscription services in 2023, and the global subscription market was estimated at $492 billion in 2024, a figure projected to triple by 2033. Companies argue that subscriptions are not just about boosting their profits and that consumers benefit too. HP's Instant Ink subscription for its printers, for example, promises that customers never have to worry about running out of ink. But once a user cancels, the printer locks out half-used cartridges, which require payment to keep using, and HP also uses DRM to block third-party cartridges.
- FSF warns Nintendo's new DRM lets it remotely brick consoles
The Free Software Foundation (FSF) has issued a warning about Nintendo's recently updated DRM, which allows Nintendo, unilaterally and at its sole discretion, to revoke players' access to games, security updates, and internet connectivity. Nintendo's new user agreement makes clear that if players fail to comply with its restrictions, Nintendo may render their Nintendo Account Services and/or the associated Nintendo devices permanently unusable. The restrictions Nintendo has set include tampering with the hardware or software in any way, attempting to run backup copies of games, running "used" games, and using third-party games or accessories. Nor is this an empty threat: within a month of the Switch 2's launch, there were already reports of players' consoles being restricted.
- Apple and Google advise visa-holding employees not to travel abroad
According to internal memos, Apple and Google are advising employees on visas not to travel abroad, lest they be stranded on their return, as the Trump administration tightens visa scrutiny. US consulates and embassies report that new Department of Homeland Security rules requiring travelers to undergo reviews of up to five years of social media history have caused long delays in visa appointments, sometimes stretching to months. Apple and Google employ more than 300,000 people and rely heavily on highly skilled foreign workers. Given the intensified scrutiny and appointment backlogs, both companies have notified some employees to avoid international travel and stay in the United States if possible. Fragomen, the law firm that works with Apple, said that employees who cannot postpone travel should contact Apple's immigration team and Fragomen in advance to discuss the risks.
- San Francisco blackout knocks out traffic signals, stranding Waymo robotaxis
A substation fire in San Francisco at 9 p.m. on Saturday caused a blackout that the utility said affected more than 100,000 customers. The outage also knocked out traffic signals. Driving in the dark with no signals at all, Waymo's driverless taxis took the most cautious approach available: crawling along at a snail's pace, slow enough to block human drivers. Under traffic law, when signals fail, cars must treat intersections as four-way stops, halting at the stop line and checking before proceeding. Waymo complied, but moved so slowly that it caused congestion, and users shared photos and videos on social media of Waymo taxis stalled at intersections.