Weekly Digest — 2025-W37
117 unique stories (2025-09-08 → 2025-09-14), aggregated across 8 sources.
Hacker News(42)
- Chat Control Must Be Stopped (www.privacyguides.org)
- iPhone dumbphone (stopa.io)
- Firefox 32-bit Linux Support to End in 2026 (blog.mozilla.org)
- Signal Secure Backups (signal.org)
- OpenWrt: A Linux OS targeting embedded devices (openwrt.org)
- Will Amazon S3 Vectors kill vector databases or save them? (zilliz.com)
- Immunotherapy drug eliminates aggressive cancers in clinical trial (www.rockefeller.edu)
- Memory Integrity Enforcement (security.apple.com)
- iPhone Air (www.apple.com)
- Microsoft is officially sending employees back to the office (www.businessinsider.com)
- ICE is using fake cell towers to spy on people's phones (www.forbes.com)
- A new experimental Go API for JSON (go.dev)
GitHub Trending(31)
- emcie-co / parlant
LLM agents built for control. Designed for real-world use. Deployed in minutes.
- microsoft / ai-agents-for-beginners
12 Lessons to Get Started Building AI Agents
- zama-ai / fhevm
FHEVM, a full-stack framework for integrating Fully Homomorphic Encryption (FHE) with blockchain applications
- bytedance / UI-TARS-desktop
The Open-Source Multimodal AI Agent Stack: Connecting Cutting-Edge AI Models and Agent Infra
- openwrt / openwrt
This repository is a mirror of https://git.openwrt.org/openwrt/openwrt.git It is for reference only and is not active for check-ins. We will continue to accept Pull Requests here. They will be merged via staging trees then into openwrt.git.
- Kilo-Org / kilocode
Open Source AI coding assistant for planning, building, and fixing code. We frequently merge features from open-source projects like Roo Code and Cline, while building our own vision. Follow us: kilocode.ai/social
- Vector-Wangel / XLeRobot
XLeRobot: Practical Dual-Arm Mobile Home Robot for $660
- x1xhlol / system-prompts-and-models-of-ai-tools
FULL v0, Cursor, Manus, Augment Code, Same.dev, Lovable, Devin, Replit Agent, Windsurf Agent, VSCode Agent, Dia Browser, Xcode, Trae AI, Cluely & Orchids.app (And other Open Sourced) System Prompts, Tools & AI Models.
- Stirling-Tools / Stirling-PDF
#1 Locally hosted web application that allows you to perform various operations on PDF files
- vercel / examples
Enjoy our curated collection of examples and solutions. Use these patterns to build your own robust and scalable applications.
- Cinnamon / kotaemon
An open-source RAG-based tool for chatting with your documents.
- Physical-Intelligence / openpi
Hugging Face(19)
- Why Language Models Hallucinate
Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such "hallucinations" persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious -- they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded -- language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This "epidemic" of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field toward more trustworthy AI systems.
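The incentive argument above can be made concrete with a toy expected-score calculation (my sketch, not the paper's formalism): under standard accuracy grading, a wrong answer and an abstention both score zero, so any guess with nonzero confidence strictly dominates saying "I don't know".

```python
# Minimal sketch (not from the paper): why binary 0/1 grading rewards guessing.
# Assume the model is uncertain and its best guess is right with probability p.
def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected benchmark score for one question under accuracy grading:
    1 point for a correct answer, 0 for a wrong answer or an abstention."""
    return 0.0 if abstain else p_correct

# Even a 10%-confident guess beats abstaining, so a model optimized to be a
# good test-taker learns to guess rather than acknowledge uncertainty.
assert expected_score(0.1, abstain=False) > expected_score(0.1, abstain=True)
```

This is why the paper argues the fix must be socio-technical: as long as leaderboard scoring never rewards abstaining, training will keep pushing models toward confident guessing.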
- Symbolic Graphics Programming with Large Language Models
Large language models (LLMs) excel at program synthesis, yet their ability to produce symbolic graphics programs (SGPs) that render into precise visual content remains underexplored. We study symbolic graphics programming, where the goal is to generate an SGP from a natural-language description. This task also serves as a lens into how LLMs understand the visual world by prompting them to generate images rendered from SGPs. Among various SGPs, our paper sticks to scalable vector graphics (SVGs). We begin by examining the extent to which LLMs can generate SGPs. To this end, we introduce SGP-GenBench, a comprehensive benchmark covering object fidelity, scene fidelity, and compositionality (attribute binding, spatial relations, numeracy). On SGP-GenBench, we discover that frontier proprietary models substantially outperform open-source models, and performance correlates well with general coding capabilities. Motivated by this gap, we aim to improve LLMs' ability to generate SGPs. We propose a reinforcement learning (RL) with verifiable rewards approach, where a format-validity gate ensures renderable SVG, and a cross-modal reward aligns text and the rendered image via strong vision encoders (e.g., SigLIP for text-image and DINO for image-image). Applied to Qwen-2.5-7B, our method substantially improves SVG generation quality and semantics, achieving performance on par with frontier systems. We further analyze training dynamics, showing that RL induces (i) finer decomposition of objects into controllable primitives and (ii) contextual details that improve scene coherence. Our results demonstrate that symbolic graphics programming offers a precise and interpretable lens on cross-modal grounding.
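The paper's "format-validity gate" ensures a renderable SVG before any semantic reward is computed. A minimal stand-in for such a gate (my sketch; the paper's actual check presumably verifies full renderability, not just well-formedness) is to parse the model output as XML and require an `svg` root:

```python
# Hedged sketch of a format-validity gate: reject outputs that are not
# well-formed SVG before computing any cross-modal reward.
import xml.etree.ElementTree as ET

def is_valid_svg(text: str) -> bool:
    """Return True if `text` parses as XML with an <svg> root element."""
    try:
        root = ET.fromstring(text)
    except ET.ParseError:
        return False
    return root.tag.endswith("svg")  # tolerate namespaced tags like "{...}svg"

assert is_valid_svg('<svg xmlns="http://www.w3.org/2000/svg"><circle r="5"/></svg>')
assert not is_valid_svg('<svg><circle r="5">')  # unclosed tag -> gate fails
```

In an RL-with-verifiable-rewards loop, outputs failing the gate would receive zero reward, so the policy first learns to emit syntactically valid SVG and only then optimizes the image-text alignment signal.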
- Set Block Decoding is a Language Model Inference Accelerator
Autoregressive next token prediction language models offer powerful capabilities but face significant challenges in practical deployment due to the high computational and memory costs of inference, particularly during the decoding stage. We introduce Set Block Decoding (SBD), a simple and flexible paradigm that accelerates generation by integrating standard next token prediction (NTP) and masked token prediction (MATP) within a single architecture. SBD allows the model to sample multiple, not necessarily consecutive, future tokens in parallel, a key distinction from previous acceleration methods. This flexibility allows the use of advanced solvers from the discrete diffusion literature, offering significant speedups without sacrificing accuracy. SBD requires no architectural changes or extra training hyperparameters, maintains compatibility with exact KV-caching, and can be implemented by fine-tuning existing next token prediction models. By fine-tuning Llama-3.1 8B and Qwen-3 8B, we demonstrate that SBD enables a 3-5x reduction in the number of forward passes required for generation while achieving the same performance as equivalent NTP training.
- WildScore: Benchmarking MLLMs in-the-Wild Symbolic Music Reasoning
Recent advances in Multimodal Large Language Models (MLLMs) have demonstrated impressive capabilities across various vision-language tasks. However, their reasoning abilities in the multimodal symbolic music domain remain largely unexplored. We introduce WildScore, the first in-the-wild multimodal symbolic music reasoning and analysis benchmark, designed to evaluate MLLMs' capacity to interpret real-world music scores and answer complex musicological queries. Each instance in WildScore is sourced from genuine musical compositions and accompanied by authentic user-generated questions and discussions, capturing the intricacies of practical music analysis. To facilitate systematic evaluation, we propose a systematic taxonomy, comprising both high-level and fine-grained musicological ontologies. Furthermore, we frame complex music reasoning as multiple-choice question answering, enabling controlled and scalable assessment of MLLMs' symbolic music understanding. Empirical benchmarking of state-of-the-art MLLMs on WildScore reveals intriguing patterns in their visual-symbolic reasoning, uncovering both promising directions and persistent challenges for MLLMs in symbolic music reasoning and analysis. We release the dataset and code.
- LuxDiT: Lighting Estimation with Video Diffusion Transformer
Estimating scene lighting from a single image or video remains a longstanding challenge in computer vision and graphics. Learning-based approaches are constrained by the scarcity of ground-truth HDR environment maps, which are expensive to capture and limited in diversity. While recent generative models offer strong priors for image synthesis, lighting estimation remains difficult due to its reliance on indirect visual cues, the need to infer global (non-local) context, and the recovery of high-dynamic-range outputs. We propose LuxDiT, a novel data-driven approach that fine-tunes a video diffusion transformer to generate HDR environment maps conditioned on visual input. Trained on a large synthetic dataset with diverse lighting conditions, our model learns to infer illumination from indirect visual cues and generalizes effectively to real-world scenes. To improve semantic alignment between the input and the predicted environment map, we introduce a low-rank adaptation finetuning strategy using a collected dataset of HDR panoramas. Our method produces accurate lighting predictions with realistic angular high-frequency details, outperforming existing state-of-the-art techniques in both quantitative and qualitative evaluations.
- LatticeWorld: A Multimodal Large Language Model-Empowered Framework for Interactive Complex World Generation
Recent research has been increasingly focusing on developing 3D world models that simulate complex real-world scenarios. World models have found broad applications across various domains, including embodied AI, autonomous driving, entertainment, etc. A more realistic simulation with accurate physics will effectively narrow the sim-to-real gap and allow us to gather rich information about the real world conveniently. While traditional manual modeling has enabled the creation of virtual 3D scenes, modern approaches have leveraged advanced machine learning algorithms for 3D world generation, with most recent advances focusing on generative methods that can create virtual worlds based on user instructions. This work explores such a research direction by proposing LatticeWorld, a simple yet effective 3D world generation framework that streamlines the industrial production pipeline of 3D environments. LatticeWorld leverages lightweight LLMs (LLaMA-2-7B) alongside an industry-grade rendering engine (e.g., Unreal Engine 5) to generate a dynamic environment. Our proposed framework accepts textual descriptions and visual instructions as multimodal inputs and creates large-scale 3D interactive worlds with dynamic agents, featuring competitive multi-agent interaction, high-fidelity physics simulation, and real-time rendering. We conduct comprehensive experiments to evaluate LatticeWorld, showing that it achieves superior accuracy in scene layout generation and visual fidelity. Moreover, LatticeWorld achieves over a 90× increase in industrial production efficiency while maintaining high creative quality compared with traditional manual production methods. Our demo video is available at https://youtu.be/8VWZXpERR18
- A Survey of Reinforcement Learning for Large Reasoning Models
In this paper, we survey recent advances in Reinforcement Learning (RL) for reasoning with Large Language Models (LLMs). RL has achieved remarkable success in advancing the frontier of LLM capabilities, particularly in addressing complex logical tasks such as mathematics and coding. As a result, RL has emerged as a foundational methodology for transforming LLMs into Large Reasoning Models (LRMs). With the rapid progress of the field, further scaling of RL for LRMs now faces foundational challenges not only in computational resources but also in algorithm design, training data, and infrastructure. To this end, it is timely to revisit the development of this domain, reassess its trajectory, and explore strategies to enhance the scalability of RL toward Artificial SuperIntelligence (ASI). In particular, we examine research applying RL to LLMs and LRMs for reasoning abilities, especially since the release of DeepSeek-R1, including foundational components, core problems, training resources, and downstream applications, to identify future opportunities and directions for this rapidly evolving area. We hope this review will promote future research on RL for broader reasoning models. Github: https://github.com/TsinghuaC3I/Awesome-RL-for-LRMs
- RewardDance: Reward Scaling in Visual Generation
Reward Models (RMs) are critical for improving generation models via Reinforcement Learning (RL), yet the RM scaling paradigm in visual generation remains largely unexplored. This is primarily due to fundamental limitations in existing approaches: CLIP-based RMs suffer from architectural and input-modality constraints, while prevalent Bradley-Terry losses are fundamentally misaligned with the next-token prediction mechanism of Vision-Language Models (VLMs), hindering effective scaling. More critically, the RLHF optimization process is plagued by reward hacking, where models exploit flaws in the reward signal without improving true quality. To address these challenges, we introduce RewardDance, a scalable reward modeling framework that overcomes these barriers through a novel generative reward paradigm. By reformulating the reward score as the model's probability of predicting a "yes" token, indicating that the generated image outperforms a reference image according to specific criteria, RewardDance intrinsically aligns reward objectives with VLM architectures. This alignment unlocks scaling across two dimensions: (1) Model Scaling: systematic scaling of RMs up to 26 billion parameters; (2) Context Scaling: integration of task-specific instructions, reference examples, and chain-of-thought (CoT) reasoning. Extensive experiments demonstrate that RewardDance significantly surpasses state-of-the-art methods in text-to-image, text-to-video, and image-to-video generation. Crucially, we address the persistent challenge of reward hacking: our large-scale RMs exhibit and maintain high reward variance during RL fine-tuning, proving their resistance to hacking and their ability to produce diverse, high-quality outputs. This greatly relieves the mode-collapse problem that plagues smaller models.
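The generative-reward reformulation reduces to a simple quantity: the VLM's next-token probability mass on "yes". A toy sketch of that computation (assumed logit shapes and token names, not RewardDance's actual code):

```python
# Illustrative sketch: reward = softmax probability of a "yes" token,
# meaning "the generated image beats the reference under the criteria".
import math

def yes_probability(logits: dict[str, float], yes_token: str = "yes") -> float:
    """Softmax probability of `yes_token` over a toy next-token logit dict."""
    m = max(logits.values())                       # subtract max for stability
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    return exps[yes_token] / sum(exps.values())

# A confident "yes" yields a reward near 1; this scalar feeds the RL loop.
reward = yes_probability({"yes": 4.0, "no": 0.0})
assert 0.97 < reward < 1.0
```

Because this score is just the model's own next-token distribution, it needs no separate regression head, which is what lets the reward model scale with the same recipe as the underlying VLM.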
- 3D and 4D World Modeling: A Survey
World modeling has become a cornerstone in AI research, enabling agents to understand, represent, and predict the dynamic environments they inhabit. While prior work largely emphasizes generative methods for 2D image and video data, they overlook the rapidly growing body of work that leverages native 3D and 4D representations such as RGB-D imagery, occupancy grids, and LiDAR point clouds for large-scale scene modeling. At the same time, the absence of a standardized definition and taxonomy for ``world models'' has led to fragmented and sometimes inconsistent claims in the literature. This survey addresses these gaps by presenting the first comprehensive review explicitly dedicated to 3D and 4D world modeling and generation. We establish precise definitions, introduce a structured taxonomy spanning video-based (VideoGen), occupancy-based (OccGen), and LiDAR-based (LiDARGen) approaches, and systematically summarize datasets and evaluation metrics tailored to 3D/4D settings. We further discuss practical applications, identify open challenges, and highlight promising research directions, aiming to provide a coherent and foundational reference for advancing the field. A systematic summary of existing literature is available at https://github.com/worldbench/survey
- AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning
Developing autonomous LLM agents capable of making a series of intelligent decisions to solve complex, real-world tasks is a fast-evolving frontier. Like human cognitive development, agents are expected to acquire knowledge and skills through exploration and interaction with the environment. Despite advances, the community still lacks a unified, interactive reinforcement learning (RL) framework that can effectively train such agents from scratch -- without relying on supervised fine-tuning (SFT) -- across diverse and realistic environments. To bridge this gap, we introduce AgentGym-RL, a new framework to train LLM agents for multi-turn interactive decision-making through RL. The framework features a modular and decoupled architecture, ensuring high flexibility and extensibility. It encompasses a wide variety of real-world scenarios, and supports mainstream RL algorithms. Furthermore, we propose ScalingInter-RL, a training approach designed for exploration-exploitation balance and stable RL optimization. In early stages, it emphasizes exploitation by restricting the number of interactions, and gradually shifts towards exploration with larger horizons to encourage diverse problem-solving strategies. In this way, the agent develops more diverse behaviors and is less prone to collapse under long horizons. We perform extensive experiments to validate the stability and effectiveness of both the AgentGym-RL framework and the ScalingInter-RL approach. Our agents match or surpass commercial models on 27 tasks across diverse environments. We offer key insights and will open-source the complete AgentGym-RL framework -- including code and datasets -- to empower the research community in developing the next generation of intelligent agents.
- P3-SAM: Native 3D Part Segmentation
Segmenting 3D assets into their constituent parts is crucial for enhancing 3D understanding, facilitating model reuse, and supporting various applications such as part generation. However, current methods face limitations such as poor robustness when dealing with complex objects and cannot fully automate the process. In this paper, we propose a native 3D point-promptable part segmentation model termed P3-SAM, designed to fully automate the segmentation of any 3D objects into components. Inspired by SAM, P3-SAM consists of a feature extractor, multiple segmentation heads, and an IoU predictor, enabling interactive segmentation for users. We also propose an algorithm to automatically select and merge masks predicted by our model for part instance segmentation. Our model is trained on a newly built dataset containing nearly 3.7 million models with reasonable segmentation labels. Comparisons show that our method achieves precise segmentation results and strong robustness on any complex objects, attaining state-of-the-art performance. Our code will be released soon.
- Hunyuan-MT Technical Report
In this report, we introduce Hunyuan-MT-7B, our first open-source multilingual translation model, which supports bidirectional translation across 33 major languages and places a special emphasis on translation between Mandarin and several ethnic minority languages as well as dialects. Furthermore, to serve and address diverse translation scenarios and enhance model performance at test time, we introduce Hunyuan-MT-Chimera-7B, a translation model inspired by the slow thinking mode. This model integrates multiple outputs generated by the Hunyuan-MT-7B model under varying parameter settings, thereby achieving performance superior to that of conventional slow-thinking models based on Chain-of-Thought (CoT). The development of our models follows a holistic training process specifically engineered for multilingual translation, which begins with general and MT-oriented pre-training to build foundational capabilities, proceeds to Supervised Fine-Tuning (SFT) for task-specific adaptation, and culminates in advanced alignment through Reinforcement Learning (RL) and weak-to-strong RL. Through comprehensive experimentation, we demonstrate that both Hunyuan-MT-7B and Hunyuan-MT-Chimera-7B significantly outperform all translation-specific models of comparable parameter size and most of the SOTA large models, particularly on the task of translation between Mandarin and minority languages as well as dialects. In the WMT2025 shared task (General Machine Translation), our models demonstrate state-of-the-art performance, ranking first in 30 out of 31 language pairs. This result highlights the robustness of our models across a diverse linguistic spectrum, encompassing high-resource languages such as Chinese, English, and Japanese, as well as low-resource languages including Czech, Marathi, Estonian, and Icelandic.
Solidot(25)
- Firefox ESR 115 Supported Until March 2026
Microsoft has ended support for Windows 7/8/8.1, and the most popular applications on those systems, browsers such as Google Chrome and Microsoft Edge, have likewise dropped support for them. Firefox 115 ESR, released by Mozilla in July 2023, is the last Firefox version to support Windows 7/8/8.1. Mozilla developers said they would evaluate at various points whether to extend that support; per the latest assessment, Mozilla plans to keep shipping security updates for Firefox 115 ESR until March 2026.
- Third-Party Windows Tool Lets Users Disable All AI Features
Flyoobe, a third-party Windows 11 tool, lets users remove the bloatware Microsoft bundles with the operating system. Its recent v1.7 update lets users discover and disable all AI and Copilot features after installing the OS; the developer says the latest version digs deeper into how AI is embedded in Windows 11. Flyoobe is hosted on Microsoft-owned GitHub under the MIT license.
- Like Humans, Every Tree Has a Unique Microbiome
A forest is a complex, dynamic ecosystem, and so is the inside of a tree. In a trunk-microbiome study published in Nature, researchers found that the woody tissue of trees harbors, alongside the tree's own cells, vast communities of bacteria and single-celled archaea. A Yale team collected wood-core samples from more than 150 trees of 16 species across the northeastern United States and extracted DNA to estimate microbial abundance in the trunks. Tree microbiomes vary by species: sugar maples, famous for maple syrup, harbor more sugar-eating bacteria, while oaks, used for wine barrels, carry a group of microbes known to aid fermentation. These examples show that tree microbes shape our daily lives in unexpected ways. Tree microbiomes also show convergent patterns: closely related tree species tend to host similar microbial communities.
- Tesla Redefines Full Self-Driving, Abandoning Its Promise of Autonomy
Tesla has changed what Full Self-Driving (FSD) means, abandoning its original promise of autonomous, i.e. unsupervised, full self-driving. Since 2016 Tesla has claimed that the cars it was producing supported unsupervised autonomy, and since 2018 CEO Elon Musk promised every year that autonomous driving would arrive by year's end. Tesla later admitted that all vehicles produced from 2016 to 2023 lack the hardware needed for autonomous driving. Tesla now says FSD stands for supervised self-driving.
- US Plans to Restrict Imports of Chinese Drones
The US Commerce Department plans to issue rules restricting or banning, on national-security grounds, imports of Chinese drones as well as vehicles over 10,000 pounds from China and other countries. Drones imported from China account for the vast majority of US commercial drone sales, with more than half coming from DJI, the world's largest drone maker. The Biden administration had previously restricted imports of Chinese-made cars and trucks on national-security grounds, and last December Biden signed a law that could pave the way for banning DJI and Autel Robotics from selling new drones in the US.
- Anthropic to Pay Book Authors $1.5 Billion to Settle Copyright Suit
Late last month AI startup Anthropic reached a settlement with book authors in a copyright class action, avoiding infringement damages that could potentially have run to billions of dollars. Court filings show Anthropic downloaded as many as 7 million e-books from the pirate libraries LibGen and PiLiMi, assembling a huge corpus in 2021 and 2022. This week the authors disclosed that Anthropic has agreed to pay $1.5 billion and destroy all copies of the books it pirated to train its AI models. The payout is the largest in US copyright-litigation history. The agreement covers 500,000 pirated works, with each author receiving $3,000 per work. Anthropic has agreed to the terms, but the settlement still requires court approval.
- NASA Bars Chinese Nationals from Its Space Programs
NASA has barred Chinese nationals holding valid visas from entering its facilities and participating in its space programs. Chinese citizens who had worked on NASA projects as contractors or students found on September 5 that they could no longer access any NASA systems or facilities. NASA subsequently confirmed the ban, citing national security: "NASA has taken internal action pertaining to Chinese nationals, including restricting access to our facilities, materials, and networks to ensure the security of our work." China and the US are both racing to return to the Moon, while the US Artemis lunar program faces cost overruns and delays.
- Why Netflix Struggles to Make High-Quality Films
In February Netflix released The Electric State, a widely panned sci-fi film starring Chris Pratt and Millie Bobby Brown, who played Eleven in Stranger Things. The film would have been quickly forgotten had its production budget not been $320 million, money that bought Netflix a Metacritic score of 30/100 and a Rotten Tomatoes score of 14%. To fill its library, Netflix has funded a stream of low-quality originals. It has made some acclaimed films, such as The Irishman, but on review sites like IMDb, Letterboxd, and TMDB, Netflix films score far below theatrical releases. Netflix has worked with renowned directors including Martin Scorsese, Alfonso Cuarón, and Bradley Cooper, but most of those projects were one-offs, and the directors rarely return. Many directors now turn Netflix down even when it offers a bigger budget: Weapons director Zach Cregger rejected Netflix's $50 million in favor of Warner Bros.' $37 million plus a guaranteed theatrical release, and Netflix offered Emerald Fennell and Margot Robbie $150 million for their Wuthering Heights adaptation, yet they too chose Warner Bros.' $80 million and a theatrical release.
- Gravitational Waves Confirm Hawking's Black-Hole Area Theorem
The Laser Interferometer Gravitational-Wave Observatory (LIGO) detected an unusually strong collision between two black holes, allowing physicists to test the black-hole area theorem Stephen Hawking proposed in 1971. The theorem states that when two black holes merge, the event horizon of the resulting black hole, the boundary from within which not even light can escape, cannot have an area smaller than the sum of the areas of the two original horizons. It echoes the second law of thermodynamics, which says that entropy, the internal disorder of a system, never decreases. Merging black holes warp the fabric of the universe, producing tiny spacetime ripples known as gravitational waves that detectors can observe. The recent collision, designated GW250114, is nearly identical to the first gravitational-wave detection in 2015: in both events the black holes weighed 30-40 solar masses and merged about 1.3 billion light-years away. The upgraded LIGO detectors are now three times as sensitive as in 2015 and captured the waves in unprecedented detail, letting researchers verify by calculation that the horizon area did indeed grow after the merger, confirming Hawking's theorem.
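The area bookkeeping behind that verification can be illustrated for non-spinning (Schwarzschild) black holes, whose horizon area is A = 16π(GM/c²)², a deliberate simplification since real merger remnants spin, which shrinks the horizon for a given mass. The masses below are illustrative, loosely modeled on the 30-40 solar-mass range reported above, not the measured GW250114 values:

```python
# Toy check of Hawking's area theorem for non-spinning black holes.
# Schwarzschild horizon area A = 16*pi*(G*M/c^2)^2, so area grows like M^2.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def horizon_area(mass_kg: float) -> float:
    r = 2 * G * mass_kg / C**2           # Schwarzschild radius
    return 4 * math.pi * r**2            # = 16*pi*(G*M/c^2)^2

# Two 33-solar-mass black holes merge; a few percent of the total mass-energy
# is radiated away as gravitational waves, leaving a ~63-solar-mass remnant.
a1 = horizon_area(33 * M_SUN)
a2 = horizon_area(33 * M_SUN)
a_final = horizon_area(63 * M_SUN)

# The final area still exceeds the sum, because area scales as M^2:
# 63^2 = 3969 > 33^2 + 33^2 = 2178. Hawking's inequality holds.
assert a_final > a1 + a2
```

The quadratic scaling is the whole trick: even though the remnant is lighter than the combined input masses, squaring the total mass more than compensates for the mass-energy carried off by the waves.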
- French Voice Actor Accuses Tomb Raider 4-6 Remastered of Using AI to Clone Her Voice
Françoise Cadol, the French voice of the Tomb Raider series, sent a cease-and-desist letter to Aspyr, developer of Tomb Raider 4-6 Remastered, accusing the studio of using AI to copy her voice without informing her or telling players. She described the move as a betrayal and utter disrespect. Beyond French, players in regions such as Brazil and Spain also believe their localized voiceovers were AI-generated from the original actors' voices. Brazilian voice actor Lene Bastos received a reply from Aspyr saying its investigation found that an external development partner had used generative AI to edit the original recordings without its knowledge; Aspyr said it had not authorized this and apologized for failing to catch the issue in review.
- Xiaohongshu Ordered to Rectify by a Deadline
In a brief statement, China's cyberspace regulator announced that Xiaohongshu, a social app with a predominantly female user base, has been ordered to rectify within a set deadline: "In response to the Xiaohongshu platform's failure to fulfill its primary responsibility for information content management, with its trending-list section repeatedly surfacing harmful content such as entries hyping celebrities' personal updates and trivia, damaging the online ecosystem, the Cyberspace Administration of China directed the Shanghai cyberspace office, in accordance with the Provisions on the Governance of the Online Information Content Ecosystem and other regulations, to summon the platform, order rectification within a deadline, issue a warning, and strictly discipline the responsible persons."
- Oracle Shares Soar, Making Larry Ellison the World's Richest Person
Oracle's stock posted its best single-day performance since 1992, soaring 36% to $328 and adding $244 billion in market value, approaching the trillion-dollar mark. The surge was driven by AI-fueled demand for cloud computing. It also added $100 billion to founder Larry Ellison's fortune, vaulting him past Elon Musk as the world's richest person.