OrangeBot.AI Digest — 2025-07-25
72 headlines across 8 sources, aggregated for this day.
Hacker News(15)
- Windsurf employee #2: I was given a payout of only 1% of what my shares were worth (twitter.com)
- Vanilla JavaScript support for Tailwind Plus (tailwindcss.com)
- Internet Archive is now a federal depository library (www.kqed.org)
- Steam, Itch.io are pulling ‘porn’ games. Critics say it's a slippery slope (www.wired.com)
- Why MIT switched from Scheme to Python (2009) (www.wisdomandwonder.com)
- Women dating safety app 'Tea' breached, users' IDs posted to 4chan (www.404media.co)
- Google's shortened goo.gl links will stop working next month (www.theverge.com)
- Programming vehicles in games (wassimulator.com)
- It's a DE9, not a DB9 (but we know what you mean) (news.sparkfun.com)
- The future is not self-hosted (www.drewlyton.com)
- Who has the fastest F1 website (2021) (jakearchibald.com)
- Show HN: Price Per Token – LLM API Pricing Data (pricepertoken.com)
- Games Look Bad: HDR and Tone Mapping (2017) (ventspace.wordpress.com)
- Quantitative AI progress needs accurate and transparent evaluation (mathstodon.xyz)
- Google spoofed via DKIM replay attack: A technical breakdown (easydmarc.com)
GitHub Trending(13)
- QwenLM / Qwen3-Coder
Qwen3-Coder is the code version of Qwen3, the large language model series developed by Qwen team, Alibaba Cloud.
- m1k1o / neko
A self-hosted virtual browser that runs in Docker and uses WebRTC.
- juspay / hyperswitch
An open source payments switch written in Rust to make payments fast, reliable and affordable
- semgrep / semgrep
Lightweight static analysis for many languages. Find bug variants with patterns that look like source code.
- OpenBB-finance / OpenBB
Investment Research for Everyone, Everywhere.
- frappe / hrms
Open Source HR and Payroll Software
- tensorzero / tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM gateway, observability, optimization, evaluation, and experimentation.
- software-mansion / react-native-reanimated
React Native's Animated library reimplemented
- steven2358 / awesome-generative-ai
A curated list of modern Generative Artificial Intelligence projects and services
- srbhr / Resume-Matcher
Improve your resumes with Resume Matcher. Get insights, keyword suggestions and tune your resumes to job descriptions.
- aaPanel / BillionMail
BillionMail gives you open-source MailServer, NewsLetter, Email Marketing — fully self-hosted, dev-friendly, and free from monthly fees. Join the discord: https://discord.gg/asfXzBUhZr
- moby / moby
The Moby Project - a collaborative project for the container ecosystem to assemble container-based systems
- twbs / bootstrap
The most popular HTML, CSS, and JavaScript framework for developing responsive, mobile first projects on the web.
Product Hunt(14)
- Memories.ai
ChatGPT for your video library, with unlimited video context
- Tender
Like tinder, but only photos of my wife & only right swipes
- Opal
Describe, create, and share your AI mini-apps
- Golex AI
Turn ideas into interactive demo websites
- Parsagon
AI to track and analyze public policy
- camelAI Embedded
Embed chat with your data in your product
- DocsBot
AI agents built for seamless support
- Frigade AI
In-app support that adapts to your product automatically
- Create tab in Google Photos
Bring your memories to life
- Norton™ Neo
Your first safe AI-native browser
- Fluensa
Make the entire internet a space for language learning
- Seed LiveInterpret 2.0
The SOTA performance of simultaneous interpretation models
- MeetHub
Book meeting rooms from Slack
- Gabriel Operator
Turns browser actions into AI agents to automate tasks
Hugging Face(15)
- ∇NABLA: Neighborhood Adaptive Block-Level Attention
Recent progress in transformer-based architectures has demonstrated remarkable success in video generation tasks. However, the quadratic complexity of full attention mechanisms remains a critical bottleneck, particularly for high-resolution and long-duration video sequences. In this paper, we propose NABLA, a novel Neighborhood Adaptive Block-Level Attention mechanism that dynamically adapts to sparsity patterns in video diffusion transformers (DiTs). By leveraging block-wise attention with an adaptive sparsity-driven threshold, NABLA reduces computational overhead while preserving generative quality. Our method does not require custom low-level operator design and can be seamlessly integrated with PyTorch's Flex Attention operator. Experiments demonstrate that NABLA achieves up to 2.7x faster training and inference than the baseline with almost no loss in quantitative metrics (CLIP score, VBench score, human evaluation score) or visual quality. The code and model weights are available here: https://github.com/gen-ai-team/Wan2.1-NABLA
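The core idea described in the abstract (score coarse block-pair attention, then keep only the block pairs carrying most of the attention mass) can be sketched in a few lines of NumPy. This is a toy illustration of block-level adaptive sparsity, not the paper's implementation; the block size, mass threshold, and mean-pooling are our assumptions:

```python
import numpy as np

def nabla_block_mask(q, k, block=4, keep_mass=0.9):
    # Downsample queries/keys by block-averaging, score block pairs,
    # and keep the smallest set of key blocks per query block whose
    # softmax mass reaches `keep_mass` (an adaptive sparsity threshold).
    nq, d = q.shape
    qb = q.reshape(nq // block, block, d).mean(axis=1)
    kb = k.reshape(-1, block, d).mean(axis=1)
    scores = qb @ kb.T / np.sqrt(d)
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    mask = np.zeros_like(probs, dtype=bool)
    for i in range(probs.shape[0]):
        order = np.argsort(probs[i])[::-1]
        cum = np.cumsum(probs[i][order])
        keep = order[: np.searchsorted(cum, keep_mass) + 1]
        mask[i, keep] = True
    return mask  # True = compute attention for this block pair
```

In a real DiT this mask would be handed to a block-sparse kernel (e.g. PyTorch's Flex Attention) so the masked-out block pairs are never computed.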
- Group Sequence Policy Optimization
This paper introduces Group Sequence Policy Optimization (GSPO), our stable, efficient, and performant reinforcement learning algorithm for training large language models. Unlike previous algorithms that adopt token-level importance ratios, GSPO defines the importance ratio based on sequence likelihood and performs sequence-level clipping, rewarding, and optimization. We demonstrate that GSPO achieves superior training efficiency and performance compared to the GRPO algorithm, notably stabilizes Mixture-of-Experts (MoE) RL training, and has the potential for simplifying the design of RL infrastructure. These merits of GSPO have contributed to the remarkable improvements in the latest Qwen3 models.
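A minimal sketch of the length-normalized sequence-level importance ratio the abstract describes, with PPO-style clipping as an assumed concrete form (the paper's exact objective may differ):

```python
import math

def gspo_sequence_ratio(logp_new, logp_old):
    # Sequence-level importance ratio with length normalization:
    # s = (pi_new(y|x) / pi_old(y|x)) ** (1 / |y|),
    # computed in log space for numerical stability.
    n = len(logp_new)
    return math.exp((sum(logp_new) - sum(logp_old)) / n)

def clipped_objective(ratio, advantage, eps=0.2):
    # Clipping applied once per sequence rather than per token;
    # the clipping form here is the standard PPO one (our assumption).
    clipped = min(max(ratio, 1 - eps), 1 + eps)
    return min(ratio * advantage, clipped * advantage)
```

The contrast with token-level methods (e.g. GRPO) is that one ratio governs the whole sequence, so a single off-policy token cannot blow up the update.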
- MUR: Momentum Uncertainty guided Reasoning for Large Language Models
Large Language Models (LLMs) have achieved impressive performance on reasoning-intensive tasks, yet optimizing their reasoning efficiency remains an open challenge. While Test-Time Scaling (TTS) improves reasoning quality, it often leads to overthinking, wasting tokens on redundant computations. This work investigates how to efficiently and adaptively guide LLM test-time scaling without additional training. Inspired by the concept of momentum in physics, we propose Momentum Uncertainty-guided Reasoning (MUR), which dynamically allocates thinking budgets to critical reasoning steps by tracking and aggregating stepwise uncertainty over time. To support flexible inference-time control, we introduce gamma-control, a simple mechanism that tunes the reasoning budget via a single hyperparameter. We provide in-depth theoretical proof to support the superiority of MUR in terms of stability and biases. MUR is comprehensively evaluated against various TTS methods across four challenging benchmarks (MATH-500, AIME24, AIME25, and GPQA-diamond) using different sizes of recent Qwen3 models (1.7B, 4B, and 8B). Results demonstrate that MUR reduces computation by over 50% on average while improving accuracy by 0.62-3.37%.
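The momentum analogy can be sketched as an exponential moving average over stepwise uncertainty; the gating threshold and the uncertainty measure itself are our assumptions, not the paper's formulation:

```python
def momentum_uncertainty(step_uncertainties, gamma=0.9):
    # "Momentum" aggregation: each step's uncertainty is folded into
    # a running average; gamma is the single control hyperparameter.
    u, trace = 0.0, []
    for c in step_uncertainties:
        u = gamma * u + (1 - gamma) * c
        trace.append(u)
    return trace

def should_think_harder(trace, threshold=0.5):
    # Spend extra test-time budget only on steps whose accumulated
    # uncertainty exceeds the threshold.
    return [u > threshold for u in trace]
```

Because the average decays old evidence, a brief spike in uncertainty does not trigger extra compute, but a sustained run of uncertain steps does.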
- LAPO: Internalizing Reasoning Efficiency via Length-Adaptive Policy Optimization
Large reasoning models have achieved remarkable performance through extended chain-of-thought sequences, yet this computational freedom leads to excessive token generation even for simple problems. We present Length-Adaptive Policy Optimization (LAPO), a novel framework that transforms reasoning length control from an external constraint into an intrinsic model capability. Unlike existing approaches that impose rigid limits or rely on post-hoc interventions, LAPO enables models to internalize an understanding of appropriate reasoning depth through a two-stage reinforcement learning process. In the first stage, models learn natural reasoning patterns by discovering the statistical distribution of successful solution lengths. The second stage leverages these patterns as meta-cognitive guidance, embedding them directly within the model's reasoning context to ensure inference-time flexibility. Experiments on mathematical reasoning benchmarks demonstrate that LAPO reduces token usage by up to 40.9% while improving accuracy by 2.3%. Our analysis reveals that models trained with LAPO develop emergent abilities to allocate computational resources based on problem complexity, achieving efficient reasoning without sacrificing quality.
- Captain Cinema: Towards Short Movie Generation
We present Captain Cinema, a generation framework for short movie generation. Given a detailed textual description of a movie storyline, our approach firstly generates a sequence of keyframes that outline the entire narrative, which ensures long-range coherence in both the storyline and visual appearance (e.g., scenes and characters). We refer to this step as top-down keyframe planning. These keyframes then serve as conditioning signals for a video synthesis model, which supports long context learning, to produce the spatio-temporal dynamics between them. This step is referred to as bottom-up video synthesis. To support stable and efficient generation of multi-scene long narrative cinematic works, we introduce an interleaved training strategy for Multimodal Diffusion Transformers (MM-DiT), specifically adapted for long-context video data. Our model is trained on a specially curated cinematic dataset consisting of interleaved data pairs. Our experiments demonstrate that Captain Cinema performs favorably in the automated creation of visually coherent and narrative consistent short movies in high quality and efficiency. Project page: https://thecinema.ai
- Hierarchical Budget Policy Optimization for Adaptive Reasoning
Large reasoning models achieve remarkable performance through extensive chain-of-thought generation, yet exhibit significant computational inefficiency by applying uniform reasoning strategies regardless of problem complexity. We present Hierarchical Budget Policy Optimization (HBPO), a reinforcement learning framework that enables models to learn problem-specific reasoning depths without sacrificing capability. HBPO addresses the fundamental challenge of exploration space collapse in efficiency-oriented training, where penalties on long output length systematically bias models away from necessary long reasoning paths. Through hierarchical budget exploration, our approach partitions rollout samples into multiple subgroups with distinct token budgets, aiming to enable efficient resource allocation while preventing degradation of capability. We introduce differentiated reward mechanisms that create budget-aware incentives aligned with the complexity of the problem, allowing models to discover natural correspondences between task requirements and computational effort. Extensive experiments demonstrate that HBPO reduces average token usage by up to 60.6% while improving accuracy by 3.14% across four reasoning benchmarks. Unlike existing methods that impose external constraints or rely on discrete mode selection, HBPO exhibits emergent adaptive behavior where models automatically adjust reasoning depth based on problem complexity. Our results suggest that reasoning efficiency and capability are not inherently conflicting, and can be simultaneously optimized through appropriately structured hierarchical training that preserves exploration diversity.
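A toy sketch of the hierarchical-budget idea: rollouts split into fixed-budget subgroups, each scored with a budget-aware reward. The linear overage penalty is illustrative, not the paper's exact mechanism:

```python
import random

def partition_budgets(rollouts, budgets=(512, 1024, 2048, 4096), seed=0):
    # Assign each rollout sample to a subgroup with its own token budget,
    # preserving exploration of both short and long reasoning paths.
    rng = random.Random(seed)
    groups = {b: [] for b in budgets}
    for r in rollouts:
        groups[rng.choice(budgets)].append(r)
    return groups

def budget_aware_reward(correct, tokens_used, budget):
    # Full credit only within the subgroup's budget; overage is
    # penalized linearly (an illustrative form of the paper's
    # "differentiated reward mechanisms").
    base = 1.0 if correct else 0.0
    overage = max(0, tokens_used - budget) / budget
    return base - 0.5 * overage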
- TTS-VAR: A Test-Time Scaling Framework for Visual Auto-Regressive Generation
Scaling visual generation models is essential for real-world content creation, yet requires substantial training and computational expenses. Alternatively, test-time scaling has garnered growing attention due to resource efficiency and promising performance. In this work, we present TTS-VAR, the first general test-time scaling framework for visual auto-regressive (VAR) models, modeling the generation process as a path searching problem. To dynamically balance computational efficiency with exploration capacity, we first introduce an adaptive descending batch size schedule throughout the causal generation process. In addition, inspired by VAR's hierarchical coarse-to-fine multi-scale generation, our framework integrates two key components: (i) At coarse scales, we observe that generated tokens are hard to evaluate, possibly leading to erroneous acceptance of inferior samples or rejection of superior samples. Noticing that the coarse scales contain sufficient structural information, we propose clustering-based diversity search. It preserves structural variety through semantic feature clustering, enabling later selection on samples with higher potential. (ii) In fine scales, resampling-based potential selection prioritizes promising candidates using potential scores, which are defined as reward functions incorporating multi-scale generation history. Experiments on the powerful VAR model Infinity show a notable 8.7% GenEval score improvement (from 0.69 to 0.75). Key insights reveal that early-stage structural features effectively influence final quality, and resampling efficacy varies across generation scales. Code is available at https://github.com/ali-vilab/TTS-VAR.
- EarthCrafter: Scalable 3D Earth Generation via Dual-Sparse Latent Diffusion
Despite the remarkable developments achieved by recent 3D generation works, scaling these methods to geographic extents, such as modeling thousands of square kilometers of Earth's surface, remains an open challenge. We address this through a dual innovation in data infrastructure and model architecture. First, we introduce Aerial-Earth3D, the largest 3D aerial dataset to date, consisting of 50k curated scenes (each measuring 600m x 600m) captured across the U.S. mainland, comprising 45M multi-view Google Earth frames. Each scene provides pose-annotated multi-view images, depth maps, normals, semantic segmentation, and camera poses, with explicit quality control to ensure terrain diversity. Building on this foundation, we propose EarthCrafter, a tailored framework for large-scale 3D Earth generation via sparse-decoupled latent diffusion. Our architecture separates structural and textural generation: 1) Dual sparse 3D-VAEs compress high-resolution geometric voxels and textural 2D Gaussian Splats (2DGS) into compact latent spaces, largely alleviating the costly computation suffering from vast geographic scales while preserving critical information. 2) We propose condition-aware flow matching models trained on mixed inputs (semantics, images, or neither) to flexibly model latent geometry and texture features independently. Extensive experiments demonstrate that EarthCrafter performs substantially better in extremely large-scale generation. The framework further supports versatile applications, from semantic-guided urban layout generation to unconditional terrain synthesis, while maintaining geographic plausibility through our rich data priors from Aerial-Earth3D. Our project page is available at https://whiteinblue.github.io/earthcrafter/
- DriftMoE: A Mixture of Experts Approach to Handle Concept Drifts
Learning from non-stationary data streams subject to concept drift requires models that can adapt on-the-fly while remaining resource-efficient. Existing adaptive ensemble methods often rely on coarse-grained adaptation mechanisms or simple voting schemes that fail to optimally leverage specialized knowledge. This paper introduces DriftMoE, an online Mixture-of-Experts (MoE) architecture that addresses these limitations through a novel co-training framework. DriftMoE features a compact neural router that is co-trained alongside a pool of incremental Hoeffding tree experts. The key innovation lies in a symbiotic learning loop that enables expert specialization: the router selects the most suitable expert for prediction, the relevant experts update incrementally with the true label, and the router refines its parameters using a multi-hot correctness mask that reinforces every accurate expert. This feedback loop provides the router with a clear training signal while accelerating expert specialization. We evaluate DriftMoE's performance across nine state-of-the-art data stream learning benchmarks spanning abrupt, gradual, and real-world drifts, testing two distinct configurations: one where experts specialize on data regimes (multi-class variant), and another where they focus on single-class specialization (task-based variant). Our results demonstrate that DriftMoE achieves competitive results with state-of-the-art stream learning adaptive ensembles, offering a principled and efficient approach to concept drift adaptation. All code, data pipelines, and reproducibility scripts are available in our public GitHub repository: https://github.com/miguel-ceadar/drift-moe.
- DMOSpeech 2: Reinforcement Learning for Duration Prediction in Metric-Optimized Speech Synthesis
Diffusion-based text-to-speech (TTS) systems have made remarkable progress in zero-shot speech synthesis, yet optimizing all components for perceptual metrics remains challenging. Prior work with DMOSpeech demonstrated direct metric optimization for speech generation components, but duration prediction remained unoptimized. This paper presents DMOSpeech 2, which extends metric optimization to the duration predictor through a reinforcement learning approach. The proposed system implements a novel duration policy framework using group relative preference optimization (GRPO) with speaker similarity and word error rate as reward signals. By optimizing this previously unoptimized component, DMOSpeech 2 creates a more complete metric-optimized synthesis pipeline. Additionally, this paper introduces teacher-guided sampling, a hybrid approach leveraging a teacher model for initial denoising steps before transitioning to the student model, significantly improving output diversity while maintaining efficiency. Comprehensive evaluations demonstrate superior performance across all metrics compared to previous systems, while reducing sampling steps by half without quality degradation. These advances represent a significant step toward speech synthesis systems with metric optimization across multiple components. The audio samples, code and pre-trained models are available at https://dmospeech2.github.io/.
- Technical Report of TeleChat2, TeleChat2.5 and T1
We introduce the latest series of TeleChat models: TeleChat2, TeleChat2.5, and T1, offering a significant upgrade over their predecessor, TeleChat. Despite minimal changes to the model architecture, the new series achieves substantial performance gains through enhanced training strategies in both pre-training and post-training stages. The series begins with TeleChat2, which undergoes pretraining on 10 trillion high-quality and diverse tokens. This is followed by Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) to further enhance its capabilities. TeleChat2.5 and T1 expand the pipeline by incorporating a continual pretraining phase with domain-specific datasets, combined with reinforcement learning (RL) to improve performance in code generation and mathematical reasoning tasks. The T1 variant is designed for complex reasoning, supporting long Chain-of-Thought (CoT) reasoning and demonstrating substantial improvements in mathematics and coding. In contrast, TeleChat2.5 prioritizes speed, delivering rapid inference. Both flagship models of T1 and TeleChat2.5 are dense Transformer-based architectures with 115B parameters, showcasing significant advancements in reasoning and general task performance compared to the original TeleChat. Notably, T1-115B outperforms proprietary models such as OpenAI's o1-mini and GPT-4o. We publicly release TeleChat2, TeleChat2.5 and T1, including post-trained versions with 35B and 115B parameters, to empower developers and researchers with state-of-the-art language models tailored for diverse applications.
- A New Pair of GloVes
This report documents, describes, and evaluates new 2024 English GloVe (Global Vectors for Word Representation) models. While the original GloVe models built in 2014 have been widely used and found useful, languages and the world continue to evolve and we thought that current usage could benefit from updated models. Moreover, the 2014 models were not carefully documented as to the exact data versions and preprocessing that were used, and we rectify this by documenting these new models. We trained two sets of word embeddings using Wikipedia, Gigaword, and a subset of Dolma. Evaluation through vocabulary comparison, direct testing, and NER tasks shows that the 2024 vectors incorporate new culturally and linguistically relevant words, perform comparably on structural tasks like analogy and similarity, and demonstrate improved performance on recent, temporally dependent NER datasets such as non-Western newswire data.
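GloVe files are plain text, one word followed by its vector components per line, and the 2024 models keep that layout. A minimal loader plus cosine similarity; the three-word sample here is invented for illustration:

```python
import numpy as np

# Hypothetical sample in GloVe's plain-text format
# ("word v1 v2 ... vd" per line).
sample = """\
king 0.5 0.7 0.1
queen 0.5 0.6 0.2
pizza -0.9 0.1 0.8
"""

def load_glove(text):
    # Parse each line into {word: vector}.
    vecs = {}
    for line in text.strip().splitlines():
        word, *vals = line.split()
        vecs[word] = np.array(vals, dtype=float)
    return vecs

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

v = load_glove(sample)
print(cosine(v["king"], v["queen"]) > cosine(v["king"], v["pizza"]))  # True
```

The same loader works on the real downloadable files; the only difference is scale (hundreds of thousands of rows, 50 to 300 dimensions).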
- GLiNER2: An Efficient Multi-Task Information Extraction System with Schema-Driven Interface
Information extraction (IE) is fundamental to numerous NLP applications, yet existing solutions often require specialized models for different tasks or rely on computationally expensive large language models. We present GLiNER2, a unified framework that enhances the original GLiNER architecture to support named entity recognition, text classification, and hierarchical structured data extraction within a single efficient model. Built on a pretrained transformer encoder architecture, GLiNER2 maintains CPU efficiency and compact size while introducing multi-task composition through an intuitive schema-based interface. Our experiments demonstrate competitive performance across extraction and classification tasks with substantial improvements in deployment accessibility compared to LLM-based alternatives. We release GLiNER2 as an open-source pip-installable library with pre-trained models and documentation at https://github.com/fastino-ai/GLiNER2.
- SegDT: A Diffusion Transformer-Based Segmentation Model for Medical Imaging
Medical image segmentation is crucial for many healthcare tasks, including disease diagnosis and treatment planning. One key area is the segmentation of skin lesions, which is vital for diagnosing skin cancer and monitoring patients. In this context, this paper introduces SegDT, a new segmentation model based on diffusion transformer (DiT). SegDT is designed to work on low-cost hardware and incorporates Rectified Flow, which improves the generation quality at reduced inference steps and maintains the flexibility of standard diffusion models. Our method is evaluated on three benchmarking datasets and compared against several existing works, achieving state-of-the-art results while maintaining fast inference speeds. This makes the proposed model appealing for real-world medical applications. This work advances the performance and capabilities of deep learning models in medical image analysis, enabling faster, more accurate diagnostic tools for healthcare professionals. The code is made publicly available at https://github.com/Bekhouche/SegDT
- Discovering and using Spelke segments
Segments in computer vision are often defined by semantic considerations and are highly dependent on category-specific conventions. In contrast, developmental psychology suggests that humans perceive the world in terms of Spelke objects--groupings of physical things that reliably move together when acted on by physical forces. Spelke objects thus operate on category-agnostic causal motion relationships which potentially better support tasks like manipulation and planning. In this paper, we first benchmark the Spelke object concept, introducing the SpelkeBench dataset that contains a wide variety of well-defined Spelke segments in natural images. Next, to extract Spelke segments from images algorithmically, we build SpelkeNet, a class of visual world models trained to predict distributions over future motions. SpelkeNet supports estimation of two key concepts for Spelke object discovery: (1) the motion affordance map, identifying regions likely to move under a poke, and (2) the expected-displacement map, capturing how the rest of the scene will move. These concepts are used for "statistical counterfactual probing", where diverse "virtual pokes" are applied on regions of high motion-affordance, and the resultant expected-displacement maps are used to define Spelke segments as statistical aggregates of correlated motion statistics. We find that SpelkeNet outperforms supervised baselines like SegmentAnything (SAM) on SpelkeBench. Finally, we show that the Spelke concept is practically useful for downstream applications, yielding superior performance on the 3DEditBench benchmark for physical object manipulation when used in a variety of off-the-shelf object manipulation models.
Solidot(15)
- Microsoft's CEO blandly addresses the mystery of the company's layoffs
Microsoft CEO Satya Nadella briefly responded on Thursday, in a memo to staff, to employee concerns over the mass layoffs. Even as its stock hits record highs, its profits set records, and it invests massively in AI, Microsoft has cut deeply: more than 15,000 jobs so far this year, including 9,000 in early July. Why lay people off when the company is doing so well? Nadella reached for standard-issue vagueness rather than a direct answer, saying only that the layoffs had weighed on employees, that progress is not linear but dynamic, sometimes dissonant, yet always demanding, and let's talk about the mission instead. This is the rhetoric executives and politicians deploy when they do not want to answer, and it means little: he will not tell employees that his pay is tied to profit and the stock price rather than to employee loyalty, and that layoffs please investors and shareholders, which benefits them and him alike.
- Mistral AI environmental report confirms AI is a thirsty beast
To improve transparency, French AI company Mistral AI, working with Carbone 4 and the French ecological transition agency ADEME, published an environmental report on its large model Mistral Large 2, confirming that AI is a thirsty beast. Inference accounts for 85.5% of the model's greenhouse-gas emissions and 91% of its water consumption. Mistral Large 2 has 123 billion parameters; training it produced roughly 20,000 tonnes of CO2-equivalent and consumed 281,000 cubic meters of water, roughly the volume of 112 Olympic-size swimming pools. Generating a 400-token response consumes about 45 mL of water and emits about 1.14 g of CO2-equivalent. Mistral says its tests show environmental impact scales with parameter count: to generate the same number of tokens, a model ten times larger has an impact an order of magnitude greater than the smaller one.
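Taken at face value, the reported per-response figures imply a small per-token footprint, and the swimming-pool comparison can be sanity-checked against the roughly 2,500 m³ volume of an Olympic pool:

```python
# Per-token footprint implied by the reported per-response figures.
water_ml, co2_g, tokens = 45.0, 1.14, 400
print(water_ml / tokens)  # ~0.11 mL of water per token
print(co2_g / tokens)     # ~0.00285 g CO2e per token

# Pool check: 281,000 m^3 over 112 pools.
print(281_000 / 112)      # ~2,509 m^3 per pool, consistent with ~2,500 m^3
```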
- Trump threatens to shut down TikTok
With negotiations going poorly after three reprieves for TikTok, US government officials are threatening to shut it down. The order forcing TikTok's US business to be sold or banned originally took effect on January 19, 2025; Trump granted a 75-day reprieve after taking office on January 20. That reprieve expired on April 5, but with talks to sell TikTok's US business to an American company still under way, Trump granted a second 75-day extension. As the second reprieve neared its end on June 19, he extended it again by 90 days. The president, who first threatened to shut down TikTok back in 2020, says only he can strike a deal that lets it keep operating in the US. But negotiations with China have not gone smoothly, and Commerce Secretary Howard Lutnick says the US government is willing to shut TikTok down if China does not approve a deal. The sticking point is the US demand that ByteDance sell the recommendation algorithm TikTok uses.
- What's new in Debian 13.0 Trixie
Debian 13.0, codenamed Trixie, is due for release on August 9. What changes does the new version bring? apt is updated to v3.0, which uses different colors to distinguish updates from downloads; systemd is upgraded to 257.7-1; the kernel is the 6.12 LTS series; Prometheus server is updated to v2.53 and OpenSSH to v10.0p1-5; and there is the usual large batch of package updates.
- GPD unveils a handheld with the Ryzen AI Max+ 395
Shenzhen-based GPD plans to show its highest-end handheld yet at ChinaJoy 2025: the GPD WIN 5, built around AMD's Ryzen AI Max+ 395 APU. The Ryzen AI Max+ 395 has so far been used mainly in workstations; it combines a 16-core, 32-thread Zen 5 CPU with a Radeon 8060S GPU, and Ryzen AI Max+ 395 laptops from ASUS and HP sell for 15,000 to 20,000 yuan. It is hard to imagine such a powerful APU in a handheld, and just as hard to imagine what its battery life will be.
- Japan to allow human fertilized eggs made from iPS cells
An expert bioethics panel under Japan's Cabinet Office has reached a basic consensus to allow human fertilized eggs (embryos) to be created from iPS cells, with culture limited to 14 days. Permitting such research under defined rules should help uncover the causes of infertility and hereditary diseases. Traditional research on fertilized eggs has mostly used eggs obtained during fertility treatment; if fertilized eggs can be readily derived from iPS cells, reproduction research may advance. Japan has led the world in using iPS cells to study the mechanisms of reproduction: mouse experiments have already produced eggs and sperm from iPS cells and, after fertilization, raised live offspring. Achieving this in humans is only a matter of time, hence the urgency of establishing rules. The team of Kyoto University professor Mitinori Saitou first created eggs and sperm from mouse iPS cells in 2011-2012 and raised offspring from them after fertilization.
- Intel to cut 24,000 jobs this year
As part of CEO Lip-Bu Tan's sweeping restructuring plan, Intel announced it will cut about 24,000 jobs in 2025 and cancel or scale back projects in Germany, Poland, Costa Rica, and Ohio. At the end of 2024 Intel had 109,800 employees, of whom 99,500 were "core employees". The chip giant says it plans to end 2025 with 75,000 core employees, implying 24,000 layoffs this year, about a quarter of its workforce. On the earnings call, Tan said the company had over-invested in new fabs and that he rejects the view that customers will come as long as the factories get built. Intel has cancelled plans to spend tens of billions of dollars on fabs and assembly-and-test facilities in Germany and Poland; its Costa Rica assembly-and-test operations will be consolidated into its Vietnam plant, with more than 2,000 of the over 3,400 employees there staying on in engineering and corporate roles.
- 2023's marine heatwaves were unprecedented
A study published in Science shows that global marine heatwaves in 2023 reached unprecedented intensity, duration, and extent. In 2023, multiple regions, including the North Atlantic, tropical Pacific, South Pacific, and North Pacific, experienced extreme marine heatwaves. Using satellite observations and ocean reanalysis data, the researchers performed a global analysis and found that the 2023 heatwaves set new records for intensity, duration, and geographic range: they lasted four times the historical average and covered 96% of the global ocean surface. By area, the strongest warming occurred in the North Atlantic, eastern tropical Pacific, North Pacific, and southwest Pacific, which together account for 90% of the anomalously warm ocean area. The North Atlantic heatwave began as early as mid-2022 and persisted for 525 days, while the southwest Pacific event broke prior records for spatial extent and duration. During the onset of El Niño in the eastern tropical Pacific, the temperature anomaly peaked at 1.63 degrees Celsius. The researchers suggest the 2023 marine heatwaves may mark a fundamental shift in ocean-atmosphere dynamics and could be an early warning that Earth's climate system is approaching a tipping point.
- CERN demonstrates an antimatter qubit
CERN's BASE collaboration has, for the first time, kept a single antiproton oscillating steadily between quantum "spin up" and "spin down" states for nearly a minute. This marks the birth of the first antimatter qubit, a major breakthrough in antimatter research that opens a new path to comparing the behavior of matter and antimatter more precisely. The antiproton is the proton's antimatter counterpart, with the same mass but opposite charge. Like tiny bar magnets, antiprotons can point "up" or "down" depending on the orientation of their quantum spin, and scientists can measure how these "magnetic moments" flip using coherent quantum transition spectroscopy. The technique matters not only for quantum sensing and quantum information processing but also as a precision tool for testing fundamental laws of nature, especially charge-parity-time (CPT) symmetry. CPT symmetry requires matter and antimatter to behave identically in all physical respects, yet the observed universe consists almost entirely of matter, in apparent contradiction with the theory.
- International Court of Justice rules a healthy environment is a human right
The International Court of Justice in The Hague has issued its "Advisory Opinion on Obligations of States in respect of Climate Change", holding that a clean, healthy, and sustainable environment is a fundamental human right and that states' failure to protect the planet from climate change may constitute a violation of international law. ICJ President Yuji Iwasawa said that greenhouse-gas emissions are "unequivocally caused by human activities" and have cross-border effects, with far-reaching consequences that "underscore the urgent and existential threat posed by climate change".
- Hidden-camera groups on Telegram
Southern Metropolis Daily reports on Telegram voyeurism channels that have recently drawn attention on social media. The channel known as the MaskPark "tree hole" forum has an all-male, Chinese membership of more than 100,000. It hosts at least 20 sub-groups, all themed around pornographic content, the largest claiming 900,000 members. The content includes covert photos of women taken in various settings, deliberately leaked private photos of ex-girlfriends, girlfriends, wives, daughters, and even mothers, and open sexual commentary about female colleagues. The forum urges members to gratify themselves at the screen; its chat logs are crude and explicit, and netizens have dubbed it China's "Nth Room" case. Lawyers say the photographers and distributors may be guilty of "illegal use of eavesdropping and covert-photography equipment" (Article 284 of the Criminal Law), punishable by up to 2 years' imprisonment where serious harm results; public dissemination chiefly constitutes "distributing obscene materials", likewise punishable by up to 2 years in serious cases; and "profiting from it would constitute 'distributing obscene materials for profit', punishable by up to life imprisonment."
- FDA's AI tool found to fabricate studies
A few weeks ago the FDA announced it was using an AI tool called Elsa to speed up approvals of drugs and medical devices. Insiders say Elsa can generate meeting notes and summaries or draft templates for emails and communiqués, but it also fabricates nonexistent studies, the so-called "hallucinations". FDA insiders say the hallucinations make Elsa unreliable for any work that matters. One staffer said anything you don't have time to double-check is untrustworthy, because the AI hallucinates with confidence. Another said the AI was supposed to save time, but instead a lot of extra time went into checking fake or misrepresented studies. Staff say Elsa cannot currently speed up drug and device approvals, which still require scientists to assess whether products are safe and effective.
- Silicon Valley AI startups embrace China's 996 work schedule
Wired reports that Silicon Valley AI startups are embracing China's controversial 996 work schedule: 9 a.m. to 9 p.m., six days a week, a 72-hour week, nearly double the standard 40-hour, five-day week. The 996 regime has drawn accusations of modern slavery, but AI startups, competing with one another and with Chinese companies, are adopting it anyway. Adrian Kinnersley, who runs an HR and recruiting firm, says he is surprised at how common the practice has become; several of his clients screen applicants before interviews on whether they are willing to accept a 996 schedule. AI startup Rilla says nearly all of its 80 employees keep 996 hours; its job ads state explicitly that the role requires more than 70 hours a week and that those unhappy with the hours should not apply. The company provides breakfast, lunch, and dinner, Saturdays included. Amrita Bhasin, CEO of AI logistics startup Sotira, says 996 is essentially mandatory for company leadership but unfair to impose on rank-and-file employees.
- Over 80% of Tuvaluans seek Australia's climate-migration visas
The South Pacific island nation of Tuvalu is among the places most threatened by climate change; with sea levels rising, scientists fear it will become uninhabitable within 80 years. Tuvalu consists of nine ring-shaped coral island groups, two of which are already largely submerged. Its 2022 census put the population at 10,643. Under a climate-migration agreement, neighboring Australia issues climate-migration visas to Tuvaluans each year. Australia says 8,750 people have applied, 82% of the country's population, but it has only 280 visa slots this year.
- Sony tackles climate change by throttling PS5 performance
Sony is testing a Power Saver mode for the PlayStation 5. Its official blog explains that the option will let games run at lower power consumption. The feature is currently in beta; once it launches, players will be able to select a "Power Saver" option. With it enabled, supported PS5 games will run at reduced performance, lowering the console's power draw; unsupported games keep their normal performance and power consumption. As for what reduced performance means, Sony says VR mode will be unavailable and some game features may be limited.