OrangeBot.AI Digest — 2025-11-06

54 headlines across 4 sources, aggregated for the day.

Hacker News (15)

  1. Show HN: Auto-Adjust Keyboard and LCD Brightness via Ambient Light Sensor [Linux] (github.com)
  2. You should write an agent (fly.io)
  3. Show HN: I scraped 3B Goodreads reviews to train a better recommendation model (book.sv)
  4. Two billion email addresses were exposed (www.troyhunt.com)
  5. Swift on FreeBSD Preview (forums.swift.org)
  6. ICC ditches Microsoft 365 for openDesk (www.binnenlandsbestuur.nl)
  7. Kimi K2 Thinking, a SOTA open-source trillion-parameter reasoning model (moonshotai.github.io)
  8. FBI tries to unmask owner of archive.is (www.heise.de)
  9. Australia has so much solar that it's offering everyone free electricity (electrek.co)
  10. Cloudflare Tells U.S. Govt That Foreign Site Blocking Efforts Are Trade Barriers (torrentfreak.com)
  11. IKEA launches new smart home range with 21 Matter-compatible products (www.ikea.com)
  12. The trust collapse: Infinite AI content is awful (arnon.dk)
  13. Open Source Implementation of Apple's Private Compute Cloud (github.com)
  14. Eating stinging nettles (rachel.blog)
  15. AI Slop vs. OSS Security (devansh.bearblog.dev)

GitHub Trending (13)

  1. 666ghj / BettaFish

    微舆: a multi-agent public-opinion analysis assistant anyone can use. It breaks information cocoons, reconstructs the full picture of public sentiment, predicts future trends, and supports decision-making. Built from scratch, with no framework dependencies.

  2. Skyvern-AI / skyvern

    Automate browser based workflows with AI

  3. nocobase / nocobase

    NocoBase is the most extensible AI-powered no-code/low-code platform for building business applications and enterprise solutions.

  4. mudler / LocalAI

    🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed, P2P and decentralized inference

  5. sst / opentui

    OpenTUI is a library for building terminal user interfaces (TUIs)

  6. imthenachoman / How-To-Secure-A-Linux-Server

    An evolving how-to guide for securing a Linux server.

  7. modelcontextprotocol / go-sdk

    The official Go SDK for Model Context Protocol servers and clients. Maintained in collaboration with Google.

  8. ad-on-is / rachoon

    🦝 Rachoon — A self-hostable way to handle invoices

  9. KotatsuApp / Kotatsu

    Manga reader for Android

  10. ggml-org / ggml

    Tensor library for machine learning

  11. FFmpeg / asm-lessons

    FFMPEG Assembly Language Lessons

  12. localstack / localstack

    💻 A fully functional local AWS cloud stack. Develop and test your cloud & Serverless apps offline

  13. PKUFlyingPig / cs-self-learning

    A self-study guide for computer science

Hugging Face (11)

  1. Diffusion Language Models are Super Data Learners

    Under strictly controlled pre-training settings, we observe a crossover: when unique data is limited, diffusion language models (DLMs) consistently surpass autoregressive (AR) models by training for more epochs. The crossover shifts later with more or higher-quality data, earlier with larger models, and persists across dense and sparse architectures. We attribute the gains to three compounding factors: (1) any-order modeling, (2) super-dense compute from iterative bidirectional denoising, and (3) built-in Monte Carlo augmentation; input or parameter noise improves AR under data constraints but cannot close the gap. At scale, a 1.7B DLM trained with a ~1.5T-token compute budget on 10B unique Python tokens overtakes an AR coder trained with strictly matched settings. In addition, a 1B-parameter DLM achieves >56% accuracy on HellaSwag and >33% on MMLU using only 1B tokens, without any special tricks, just by repeating standard pre-training data. We also show that rising validation cross-entropy does not imply degraded downstream performance in this regime.
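
    A toy sketch of the "built-in Monte Carlo augmentation" the abstract credits (my illustration, assuming the standard masked-diffusion training setup, not the paper's code): each epoch re-samples a fresh masking ratio and mask pattern over the same tokens, so repeated data still yields new training views, whereas an AR model always sees the identical left-to-right factorization.

      import random

      tokens = "the quick brown fox jumps over the lazy dog".split()

      def masked_view(seed: int):
          """One diffusion-style training view: mask a random subset of tokens."""
          rng = random.Random(seed)
          ratio = rng.random()                    # masking ratio t ~ U(0, 1)
          masked = [w if rng.random() > ratio else "[MASK]" for w in tokens]
          targets = [w for w, m in zip(tokens, masked) if m == "[MASK]"]
          return masked, targets

      for epoch in range(3):                      # same sentence, three distinct views
          inp, tgt = masked_view(seed=epoch)
          print(" ".join(inp), "->", tgt)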

  2. UniAVGen: Unified Audio and Video Generation with Asymmetric Cross-Modal Interactions

    Due to the lack of effective cross-modal modeling, existing open-source audio-video generation methods often exhibit compromised lip synchronization and insufficient semantic consistency. To mitigate these drawbacks, we propose UniAVGen, a unified framework for joint audio and video generation. UniAVGen is anchored in a dual-branch joint synthesis architecture, incorporating two parallel Diffusion Transformers (DiTs) to build a cohesive cross-modal latent space. At its heart lies an Asymmetric Cross-Modal Interaction mechanism, which enables bidirectional, temporally aligned cross-attention, thus ensuring precise spatiotemporal synchronization and semantic consistency. Furthermore, this cross-modal interaction is augmented by a Face-Aware Modulation module, which dynamically prioritizes salient regions in the interaction process. To enhance generative fidelity during inference, we additionally introduce Modality-Aware Classifier-Free Guidance, a novel strategy that explicitly amplifies cross-modal correlation signals. Notably, UniAVGen's robust joint synthesis design enables seamless unification of pivotal audio-video tasks within a single model, such as joint audio-video generation and continuation, video-to-audio dubbing, and audio-driven video synthesis. Comprehensive experiments validate that, with far fewer training samples (1.3M vs. 30.1M), UniAVGen delivers overall advantages in audio-video synchronization, timbre consistency, and emotion consistency.
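
    A rough sketch of the bidirectional, temporally aligned cross-attention at the core of the design (my simplification in PyTorch; module names and dimensions are invented, and this is not the UniAVGen code):

      import torch
      import torch.nn as nn

      class BiCrossAttention(nn.Module):
          """Each branch queries the other, then applies a residual update."""
          def __init__(self, dim=64, heads=4):
              super().__init__()
              self.audio_from_video = nn.MultiheadAttention(dim, heads, batch_first=True)
              self.video_from_audio = nn.MultiheadAttention(dim, heads, batch_first=True)

          def forward(self, audio, video):
              a_out, _ = self.audio_from_video(audio, video, video)  # audio queries video
              v_out, _ = self.video_from_audio(video, audio, audio)  # video queries audio
              return audio + a_out, video + v_out                    # residual updates

      audio = torch.randn(2, 100, 64)   # (batch, audio frames, dim)
      video = torch.randn(2, 25, 64)    # (batch, video frames, dim)
      a, v = BiCrossAttention()(audio, video)
      print(a.shape, v.shape)           # each stream keeps its own length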

  3. LEGO-Eval: Towards Fine-Grained Evaluation on Synthesizing 3D Embodied Environments with Tool Augmentation

    Despite recent progress in using Large Language Models (LLMs) for automatically generating 3D scenes, generated scenes often lack realistic spatial layouts and object attributes found in real-world environments. As this problem stems from insufficiently detailed, coarse-grained instructions, advancing 3D scene synthesis guided by more detailed, fine-grained instructions that reflect real-world environments becomes crucial. Without such realistic scenes, training embodied agents in unrealistic environments can lead them to learn priors that diverge significantly from real-world physics and semantics, degrading their performance when deployed. Thus, verifying the alignment between the fine-grained instruction and the generated scene is essential for effective learning. However, current evaluation methods, such as CLIPScore and vision-language models (VLMs), often fail to reliably assess such alignment. This shortcoming arises primarily from their shallow understanding of 3D scenes, which often leads to improperly grounded scene components. To address this, we introduce LEGO-Eval, an evaluation framework equipped with diverse tools designed to explicitly ground scene components, enabling more accurate alignment assessments. We also present LEGO-Bench, a benchmark of detailed instructions that specify complex layouts and attributes of real-world environments. Experiments demonstrate that LEGO-Eval outperforms VLM-as-a-judge by 0.41 F1 score in assessing scene-instruction alignment. Benchmarking with LEGO-Bench reveals significant limitations in current generation methods. Across all evaluated approaches, success rates reached at most 10% in generating scenes that fully align with fine-grained instructions.

  4. Orion-MSP: Multi-Scale Sparse Attention for Tabular In-Context Learning

    Tabular data remain the predominant format for real-world applications. Yet, developing effective neural models for tabular data remains challenging due to heterogeneous feature types and complex interactions occurring at multiple scales. Recent advances in tabular in-context learning (ICL), such as TabPFN and TabICL, have achieved state-of-the-art performance comparable to gradient-boosted trees (GBTs) without task-specific fine-tuning. However, current architectures exhibit key limitations: (1) single-scale feature processing that overlooks hierarchical dependencies, (2) dense attention with quadratic scaling in table width, and (3) strictly sequential component processing that prevents iterative representation refinement and cross-component communication. To address these challenges, we introduce Orion-MSP, a tabular ICL architecture featuring three key innovations: (1) multi-scale processing to capture hierarchical feature interactions; (2) block-sparse attention combining windowed, global, and random patterns for scalable efficiency and long-range connectivity; and (3) a Perceiver-style memory enabling safe bidirectional information flow across components. Across diverse benchmarks, Orion-MSP matches or surpasses state-of-the-art performance while scaling effectively to high-dimensional tables, establishing a new standard for efficient tabular in-context learning. The model is publicly available at https://github.com/Lexsi-Labs/Orion-MSP .
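
    A minimal sketch of a block-sparse attention mask combining windowed, global, and random patterns, in the spirit the abstract describes (my illustration with invented block counts, not the Orion-MSP implementation):

      import numpy as np

      def block_sparse_mask(n_blocks=16, window=1, n_global=1, n_random=2, seed=0):
          rng = np.random.default_rng(seed)
          mask = np.zeros((n_blocks, n_blocks), dtype=bool)
          for i in range(n_blocks):
              lo, hi = max(0, i - window), min(n_blocks, i + window + 1)
              mask[i, lo:hi] = True                                # local window
              mask[i, rng.choice(n_blocks, size=n_random)] = True  # random long-range links
          mask[:n_global, :] = True   # global blocks attend everywhere...
          mask[:, :n_global] = True   # ...and are attended to by everyone
          return mask

      m = block_sparse_mask()
      print(f"attention density: {m.mean():.2f} (dense attention would be 1.00)")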

  5. TabTune: A Unified Library for Inference and Fine-Tuning Tabular Foundation Models

    Tabular foundation models represent a growing paradigm in structured data learning, extending the benefits of large-scale pretraining to tabular domains. However, their adoption remains limited due to heterogeneous preprocessing pipelines, fragmented APIs, inconsistent fine-tuning procedures, and the absence of standardized evaluation for deployment-oriented metrics such as calibration and fairness. We present TabTune, a unified library that standardizes the complete workflow for tabular foundation models through a single interface. TabTune provides consistent access to seven state-of-the-art models supporting multiple adaptation strategies, including zero-shot inference, meta-learning, supervised fine-tuning (SFT), and parameter-efficient fine-tuning (PEFT). The framework automates model-aware preprocessing, manages architectural heterogeneity internally, and integrates evaluation modules for performance, calibration, and fairness. Designed for extensibility and reproducibility, TabTune enables consistent benchmarking of adaptation strategies of tabular foundation models. The library is open source and available at https://github.com/Lexsi-Labs/TabTune .

  6. Kinematify: Open-Vocabulary Synthesis of High-DoF Articulated Objects

    A deep understanding of kinematic structures and movable components is essential for enabling robots to manipulate objects and model their own articulated forms. Such understanding is captured through articulated objects, which are essential for tasks such as physical simulation, motion planning, and policy learning. However, creating these models, particularly for objects with high degrees of freedom (DoF), remains a significant challenge. Existing methods typically rely on motion sequences or strong assumptions from hand-curated datasets, which hinders scalability. In this paper, we introduce Kinematify, an automated framework that synthesizes articulated objects directly from arbitrary RGB images or textual descriptions. Our method addresses two core challenges: (i) inferring kinematic topologies for high-DoF objects and (ii) estimating joint parameters from static geometry. To achieve this, we combine MCTS search for structural inference with geometry-driven optimization for joint reasoning, producing physically consistent and functionally valid descriptions. We evaluate Kinematify on diverse inputs from both synthetic and real-world environments, demonstrating improvements in registration and kinematic topology accuracy over prior work.

  7. MME-CC: A Challenging Multi-Modal Evaluation Benchmark of Cognitive Capacity

    As reasoning models scale rapidly, the essential role of multimodality in human cognition has come into sharp relief, driving a growing need to probe vision-centric cognitive behaviors. Yet, existing multimodal benchmarks either overemphasize textual reasoning or fall short of systematically capturing vision-centric cognitive behaviors, leaving the cognitive capacity of MLLMs insufficiently assessed. To address this limitation, we introduce MME-CC (Multi-Modal Evaluation benchmark of Cognitive Capacity), a vision-grounded benchmark that organizes 11 representative reasoning tasks into three fundamental categories of visual information: spatial, geometric, and knowledge-based reasoning, and provides fine-grained analyses of MLLMs' cognitive capacity across these dimensions. Based on MME-CC, we conduct extensive experiments over 16 representative MLLMs. Our study reveals that closed-source models currently lead overall (e.g., 42.66 for Gemini-2.5-Pro vs. 30.45 for GLM-4.5V), while spatial and geometric reasoning remain broadly weak (less than or equal to 30%). We further identify common error patterns, including orientation mistakes, fragile cross-view identity persistence, and poor adherence to counterfactual instructions, and observe that Chain-of-Thought typically follows a three-stage process (extract -> reason -> verify) with heavy reliance on visual extraction. We hope this work catalyzes a shift toward treating the cognitive capacity of MLLMs as central to both evaluation and model design.

  8. LiveTradeBench: Seeking Real-World Alpha with Large Language Models

    Large language models (LLMs) achieve strong performance across benchmarks--from knowledge quizzes and math reasoning to web-agent tasks--but these tests occur in static settings, lacking real dynamics and uncertainty. Consequently, they evaluate isolated reasoning or problem-solving rather than decision-making under uncertainty. To address this, we introduce LiveTradeBench, a live trading environment for evaluating LLM agents in realistic and evolving markets. LiveTradeBench follows three design principles: (i) Live data streaming of market prices and news, eliminating dependence on offline backtesting and preventing information leakage while capturing real-time uncertainty; (ii) a portfolio-management abstraction that extends control from single-asset actions to multi-asset allocation, integrating risk management and cross-asset reasoning; and (iii) multi-market evaluation across structurally distinct environments--U.S. stocks and Polymarket prediction markets--differing in volatility, liquidity, and information flow. At each step, an agent observes prices, news, and its portfolio, then outputs percentage allocations that balance risk and return. Using LiveTradeBench, we run 50-day live evaluations of 21 LLMs across families. Results show that (1) high LMArena scores do not imply superior trading outcomes; (2) models display distinct portfolio styles reflecting risk appetite and reasoning dynamics; and (3) some LLMs effectively leverage live signals to adapt decisions. These findings expose a gap between static evaluation and real-world competence, motivating benchmarks that test sequential decision making and consistency under live uncertainty.
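
    A tiny sketch of the output contract implied by the portfolio-management abstraction (an assumed interface with invented tickers, not the LiveTradeBench API): at each step the agent's raw scores become non-negative percentage allocations that sum to 1, cash included.

      def normalize_allocation(raw_scores: dict) -> dict:
          """Clip negative scores, then renormalize so allocations sum to 1."""
          clipped = {k: max(v, 0.0) for k, v in raw_scores.items()}
          total = sum(clipped.values()) or 1.0
          return {k: v / total for k, v in clipped.items()}

      print(normalize_allocation({"AAPL": 2.0, "TSLA": 1.0, "CASH": 1.0}))
      # {'AAPL': 0.5, 'TSLA': 0.25, 'CASH': 0.25}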

  9. The Sequential Edge: Inverse-Entropy Voting Beats Parallel Self-Consistency at Matched Compute

    We revisit test-time scaling for language model reasoning and ask a fundamental question: at equal token budget and compute, is it better to run multiple independent chains in parallel, or to run fewer chains that iteratively refine through sequential steps? Through comprehensive evaluation across 5 state-of-the-art open source models and 3 challenging reasoning benchmarks, we find that sequential scaling, where chains explicitly build upon previous attempts, consistently outperforms the dominant parallel self-consistency paradigm in 95.6% of configurations, with accuracy gains of up to 46.7%. Further, we introduce inverse-entropy weighted voting, a novel training-free method that further boosts the accuracy of sequential scaling. By weighting answers in proportion to the inverse entropy of their reasoning chains, we increase the success rate over parallel majority voting and establish it as the optimal test-time scaling strategy. Our findings fundamentally challenge the parallel-reasoning orthodoxy that has dominated test-time scaling since self-consistency decoding (Wang et al., 2022), positioning sequential refinement as the robust default for modern LLM reasoning and necessitating a paradigm shift in how we approach inference-time optimization.
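
    A minimal sketch of inverse-entropy weighted voting as the abstract describes it (my formulation of the weighting; the paper's exact estimator may differ): each chain's final answer votes with weight proportional to the inverse entropy of its reasoning chain, so confident chains count for more.

      from collections import defaultdict

      def inverse_entropy_vote(chains):
          """chains: list of (answer, mean_token_entropy) pairs."""
          scores = defaultdict(float)
          for answer, entropy in chains:
              scores[answer] += 1.0 / max(entropy, 1e-8)  # low entropy => high weight
          return max(scores, key=scores.get)

      # Two low-entropy chains agreeing on "42" outvote three high-entropy ones on "41",
      # even though "41" would win a plain majority vote.
      print(inverse_entropy_vote([("42", 0.3), ("42", 0.4),
                                  ("41", 2.0), ("41", 2.5), ("41", 2.2)]))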

  10. Grounded Misunderstandings in Asymmetric Dialogue: A Perspectivist Annotation Scheme for MapTask

    Collaborative dialogue relies on participants incrementally establishing common ground, yet in asymmetric settings they may believe they agree while referring to different entities. We introduce a perspectivist annotation scheme for the HCRC MapTask corpus (Anderson et al., 1991) that separately captures speaker and addressee grounded interpretations for each reference expression, enabling us to trace how understanding emerges, diverges, and repairs over time. Using a scheme-constrained LLM annotation pipeline, we obtain 13k annotated reference expressions with reliability estimates and analyze the resulting understanding states. The results show that full misunderstandings are rare once lexical variants are unified, but multiplicity discrepancies systematically induce divergences, revealing how apparent grounding can mask referential misalignment. Our framework provides both a resource and an analytic lens for studying grounded misunderstanding and for evaluating (V)LLMs' capacity to model perspective-dependent grounding in collaborative dialogue.

  11. Let Multimodal Embedders Learn When to Augment Query via Adaptive Query Augmentation

    Query augmentation makes queries more meaningful by appending further information to them to find relevant documents. Current studies have proposed Large Language Model (LLM)-based embedders, which learn representation for embedding and generation for query augmentation in a multi-task manner by leveraging the generative capabilities of LLMs. During inference, these jointly trained embedders conduct query augmentation followed by embedding, with strong results. However, augmenting every query leads to substantial embedding latency, and query augmentation can be detrimental to performance for some queries. Also, previous methods have not been explored in multimodal environments. To tackle these problems, we propose M-Solomon, a universal multimodal embedder that can adaptively determine when to augment queries. Our approach first divides the queries of the training datasets into two groups at the dataset level: one includes queries that require augmentation and the other includes queries that do not. Then, we introduce a synthesis process that generates appropriate augmentations for queries that require them by leveraging a powerful Multimodal LLM (MLLM). Next, we present adaptive query augmentation. Through this step, M-Solomon can conduct query augmentation only when necessary, by learning to generate synthetic augmentations with the prefix /augment for queries that demand them and to generate the simple string /embed for others. Experimental results showed that M-Solomon not only surpassed the baseline without augmentation by a large margin but also outperformed the baseline that always used augmentation, while offering much lower embedding latency.
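
    An illustrative sketch of the adaptive control flow described above (the interfaces are hypothetical stand-ins, not the M-Solomon API): the embedder first generates either "/embed" or "/augment <text>", and only queries routed to "/augment" pay the extra generation cost before embedding.

      def adaptive_embed(query: str, generate, embed):
          """generate/embed are stand-ins for the trained model's two roles."""
          decision = generate(query)          # model emits "/embed" or "/augment ..."
          if decision.startswith("/augment"):
              query = query + " " + decision[len("/augment"):].strip()
          return embed(query)

      # Toy stand-ins so the sketch runs end to end:
      fake_generate = lambda q: "/augment capital city of France" if len(q) < 10 else "/embed"
      fake_embed = lambda q: [float(len(q))]  # a real embedder returns a dense vector
      print(adaptive_embed("Paris?", fake_generate, fake_embed))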

Solidot (15)

  1. Chrome will drop XSLT support in November 2026

    Google announced on the official Chrome blog that support for Extensible Stylesheet Language Transformations (XSLT) will be removed when v155 ships on November 17, 2026. Google's stated rationale is improved security, and it notes that the Firefox and WebKit projects have similar plans. XML documents are easy for computers to read but hard for humans; XSLT exists to transform XML into more human-readable formats such as HTML. Chrome, Firefox, Safari, and the other major browsers all support client-side XSLT rendering, but only the 1.0 specification from 1999, not the latest 3.0 version from 2017. Google floated removing XSLT as far back as 2013 but never acted on it; this year's WHATWG meeting formally put the removal proposal on the agenda. Chrome developers argue that the XSLT codebase browsers rely on has aged and is prone to memory-safety vulnerabilities, and that usage is vanishingly low: only about one in 7,891 page loads involves client-side XSLT.
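
    For readers who have never touched the feature being removed, here is a minimal sketch of an XSLT 1.0 transform (the only version browsers implement), run outside the browser with Python's lxml; the document and stylesheet are invented for illustration:

      from lxml import etree

      # A toy XML document and an XSLT 1.0 stylesheet that renders it as HTML.
      xml = etree.XML("<books><book><title>Dune</title></book></books>")
      xslt = etree.XML("""
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="/books">
          <html><body><ul>
            <xsl:for-each select="book">
              <li><xsl:value-of select="title"/></li>
            </xsl:for-each>
          </ul></body></html>
        </xsl:template>
      </xsl:stylesheet>""")

      transform = etree.XSLT(xslt)   # compile the stylesheet
      print(str(transform(xml)))     # <html><body><ul><li>Dune</li>...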

  2. Astronomers detect the brightest black hole flare on record

    According to a study published in Nature Astronomy, astronomers have detected the brightest flare of light ever seen from a black hole, produced as it devoured a star of at least 30 solar masses; at its peak the flare shone more than 10 trillion times brighter than the Sun. When astronomers first observed the object in 2018, they did not realize it was a super-flare. After noticing the brightening, researchers trained the 200-inch Hale Telescope at Palomar Observatory on it. In 2023 the team noticed the flare remained unusually bright even five years on, so they followed up with deeper observations at the Keck Observatory in Hawaii, which showed the object lies about 3 million kiloparsecs, or 10 billion light-years, from Earth. To appear so bright at such a distance, the emitted light must be extraordinarily intense; astronomers now say the flare is 30 times brighter than any black hole flare previously detected. The plausible explanation, the researchers argue, is that a massive star met its end by straying too close to the black hole: as the black hole's gravity tore the star apart, the light it emitted brightened tens of times over. And because the flare has not yet fully faded, they believe the star may not have been completely swallowed.

  3. 43% of Gen Z prefer YouTube and TikTok to traditional TV and streaming

    A survey by Activate Consulting finds that 43% of Gen Z prefer YouTube and TikTok over traditional television or paid streaming. Global media revenue is growing sharply while traditional TV ratings plummet; people now average more than 13 hours of content consumption a day across platforms, with multitasking effectively giving everyone a "32-hour day." The survey also finds that 1-2 minute micro-dramas are spreading fast, with 28 million US adults (52% of them aged 18-34) consuming the new format. It projects that by 2029 global internet and media revenue will grow by $388 billion, daily streaming-video viewing will rise to 4 hours 8 minutes, and traditional TV viewing will fall to 1 hour 17 minutes. Streaming revenue (advertising plus subscriptions) is projected to grow 18-19% a year, while traditional TV revenue declines 4-6% a year.

  4. China requires state-funded data centers to use domestic AI chips

    China has issued guidelines requiring newly built state-funded data centers to use domestically made AI chips. Data centers less than 30% complete must remove all installed foreign chips or cancel planned purchases; those more than 30% complete will be reviewed case by case. This may be the strongest move yet to purge foreign technology from critical infrastructure.

  5. France moves to block Shein's website

    The French government says it will ban fast-fashion retailer Shein from operating in the country after childlike sex dolls and large numbers of weapons were found for sale on its online platform. Interior minister Laurent Nuñez filed a legal request on Wednesday to block Shein's website "to finally put an end to the serious harm Shein's failings inflict on public order." The finance ministry said the move came after "large quantities" of category-A weapons, confirmed to include machetes, axes, and brass knuckles, were found listed by third-party sellers on Shein's marketplace. Paris prosecutors had just opened an investigation into Shein's sale of childlike "inflatable dolls" as suspected child pornography; distributing and possessing child pornography are both illegal in France. Enforcement is tightening elsewhere too: a Japan Football Association (JFA) executive was recently convicted for viewing child pornography on an airplane.

  6. World Economic Forum chair warns of three bubbles

    World Economic Forum chair Borge Brende said on Wednesday that financial markets may harbor three bubbles the world should watch warily: a crypto bubble, an AI bubble, and a debt bubble. Government debt burdens, he said, have never been heavier, something unseen since 1945. While AI could lift productivity dramatically, Brende warned it could also threaten many white-collar jobs; in the worst case, big cities full of white-collar work could develop "rust belts" of their own.

  7. Japanese localization community shuts down over Mozilla's AI translation

    Mozilla's introduction of AI translation has led the Japanese localization community of SUMO to cease operations. The translation bot sumobot was introduced on October 22 to help produce Japanese knowledge-base articles. The SUMO community found that it ignored the translation guidelines; showed no respect for the existing localization relied on by Japanese users; immediately approved its own raw English-to-Japanese machine translations of every archived knowledge-base article; left human updates waiting up to 72 hours for approval, making it impossible to train new contributors; and operated without community approval, oversight, or communication. More than 300 knowledge-base articles were overwritten by sumobot. Regarding this as large-scale destruction of their work, the community's leads announced they will no longer contribute to support.mozilla.org, demanded that their translations be barred from training bots and AI, and asked that all their translations be removed from AI training data.

  8. New HDR10+ Advanced promises better motion smoothing

    Samsung has revealed new details of HDR10+ Advanced, the most interesting of which is HDR10+ Intelligent FRC (frame rate conversion), a feature that aims to improve motion smoothing. With motion smoothing, a TV analyzes each frame of video and decides what extra frames to insert so the video's frame rate matches the TV's refresh rate. A TV with motion smoothing (also called motion compensation or frame interpolation) enabled and a 60 Hz refresh rate, fed a 24p film, will interpolate frames to make the 24p film look as if it were shot at 60p, smoothing the picture and eliminating judder. But interpolated frames often look unnatural. Intelligent FRC takes a finer-grained approach: it lets content creators control the level of motion smoothing applied in each scene, and allows interpolation strength to adapt to ambient light.
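
    To make the 24p-to-60 Hz arithmetic concrete, here is a toy sketch of naive frame-rate-conversion timing (assumed simple linear blending for illustration; it says nothing about Samsung's actual Intelligent FRC algorithm):

      SRC_FPS, DST_FPS = 24, 60

      def blend_plan(n_out: int):
          """For each 60 Hz output frame, report which pair of 24p source
          frames to blend and the blend weight of the later frame."""
          plan = []
          for i in range(n_out):
              t = i * SRC_FPS / DST_FPS   # position on the 24p timeline
              a = int(t)                  # earlier source frame
              w = t - a                   # 0.0 = show frame a, 0.5 = mid-blend
              plan.append((a, a + 1, round(w, 2)))
          return plan

      print(blend_plan(5))  # [(0, 1, 0.0), (0, 1, 0.4), (0, 1, 0.8), (1, 2, 0.2), (1, 2, 0.6)]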

  9. DRAM chip prices are rising faster than gold

    DRAM contract prices in Q3 2025 soared 171.8% year over year, outpacing gold. ADATA chairman Simon Chen (陈立白) says Q4 2025 will mark the start of a DRAM bull market, and he expects severe supply shortages in 2026. Memory makers have shifted production toward data-center parts such as RDIMM and HBM, cutting output of consumer DDR5. A Corsair Vengeance RGB dual-channel DDR5 kit that sold for $91 in July now lists for $183 on Newegg. The price surge has spilled over into NAND flash and hard drives. Analysts expect the increases to last at least four years, matching the length of the supply contracts enterprises have signed with Samsung and SK Hynix.

  10. YouTube removes over 700 documentaries on Israeli human-rights abuses

    Google's YouTube has confirmed that, complying with a Trump administration order, it deleted the accounts and all videos of sanctioned Palestinian human-rights organizations. The affected organizations include Al-Haq, the Al Mezan Center for Human Rights, and the Palestinian Centre for Human Rights, which together had published more than 700 documentaries on Israeli human-rights abuses. Early this year the International Criminal Court in The Hague charged Israeli prime minister Benjamin Netanyahu and former defense minister Yoav Gallant with war crimes in Gaza; the Trump administration then sanctioned the ICC, and Microsoft promptly disabled the account of the ICC's chief prosecutor. In September the administration added the Palestinian human-rights groups to its sanctions list for cooperating with the ICC on war-crimes allegations against Israeli officials. YouTube quietly removed the groups' accounts, channels, and videos in early October: Al Mezan says its channel was shut down on October 7 without prior notice, and Al-Haq says its channel was closed on October 3 for violating content guidelines. Human-rights groups condemned YouTube's move as complicity.

  11. Security-firm employees charged with launching ransomware attacks

    US prosecutors have charged three security-company employees with turning on the clients they were paid to protect: Kevin Tyler Martin and an unnamed colleague at DigitalMint, and Ryan Clifford Goldberg, a former incident-response manager at Sygnia. The three are accused of breaking into companies, stealing sensitive data, and deploying ransomware built by ALPHV/BlackCat. ALPHV/BlackCat operates a ransomware-as-a-service model: it supplies the ransomware, affiliates (in this case the three security employees) breach corporate networks and deploy it, and the affiliates keep a share of any ransom. DigitalMint's business is negotiating ransoms with ransomware gangs, so its two indicted employees acted as both negotiators and beneficiaries of the ransoms. Prosecutors say the three attacked at least five US companies and extracted more than $1.2 million in ransom from one of them.

  12. Over 70% of developers consider Steam a PC gaming monopoly

    Atomik Research surveyed 306 game-industry executives in the US and UK between May 18 and 22, 2025; three quarters of respondents were C-level executives, and 77% came from studios with more than 50 people. The study found that most studios earn over three quarters of their revenue from Steam, and 72% of respondents consider Steam a monopoly in the PC game market. Developers are also turning to other platforms such as the Epic Games Store and the Xbox PC games store: 48% of respondents have shipped games on both platforms, 10% have used GOG, and 8% have used Itch.io. 32% of developers have released some games on physical media.

  13. Trump renominates Jared Isaacman as NASA administrator

    US president Trump has again nominated billionaire and private astronaut Jared Isaacman to lead NASA. His statement did not explain why he withdrew Isaacman's nomination in May only to now deem him fit for the job. The earlier withdrawal was widely linked to Elon Musk's exit from Trump's inner circle: Isaacman was Musk's preferred candidate and has flown to Earth orbit multiple times aboard SpaceX craft. In July, Trump made transportation secretary Sean Duffy acting NASA administrator, but Duffy's recent remarks and the NASA plans he has revealed have drawn considerable controversy. Meanwhile, Trump's aides kept recommending Isaacman, who has been seen dining with Trump on several occasions, suggesting the two are on good terms. With the US government currently shut down, confirming Isaacman's nomination could take a long time.

  14. Astronomers may have found the first generation of stars after the Big Bang

    Astronomers have long searched for the universe's first generation of stars, and they may finally have found a trace of them. A team at the University of Toledo in Ohio, after detailed analysis of gravitationally lensed observations from the James Webb Space Telescope (JWST), believes it may have captured the light of these primordial stars in the distant galaxy LAP1-B. First-generation stars were made almost entirely of hydrogen and helium with trace amounts of lithium, the primordial elements left over from the Big Bang. They are exceedingly rare and so short-lived that they died out long ago, but their faint light can still be caught after traveling across vast distances. Previous first-star candidates were all eventually ruled out for failing three theoretical predictions: they should form in small, extremely metal-poor dark-matter halos; their masses should fall between 10 and 1,000 solar masses; and they should be born in small clusters totaling only a few thousand solar masses. LAP1-B appears to satisfy all three: the system formed in a dark-matter clump of about 50 million solar masses, its stars have masses between 10 and 1,000 solar masses, and they exist as small clusters whose total mass is only a few thousand solar masses.

  15. Microsoft tests replacing the desktop search box with Copilot

    Microsoft is integrating its AI assistant Copilot into every one of its products, and Windows is embedding it ever more deeply. In the latest Windows Insider Dev and Beta builds, Microsoft is testing replacing the traditional desktop search box with Copilot. The Copilot search box is not enabled by default: the standard box shows the text "Search," while the Copilot version shows "Ask Copilot anything," and users can type either Copilot prompts or search keywords. Current testing suggests it is not as capable as traditional search.