Weekly Digest — 2025-W46
125 unique stories (2025-11-10 → 2025-11-16), aggregated across 8 sources.
Hacker News(42)
- Vibe Code Warning – A personal case study (github.com)
- The lazy Git UI you didn't know you need (www.bwplotka.dev)
- Redmond, WA, turns off Flock Safety cameras after ICE arrests (www.seattletimes.com)
- Unexpected things that are people (bengoldhaber.substack.com)
- How cops can get your private online data (www.eff.org)
- Asus Ascent GX10 (www.asus.com)
- Collaboration sucks (newsletter.posthog.com)
- FFmpeg to Google: Fund us or stop sending bugs (thenewstack.io)
- Scaling HNSWs (antirez.com)
- The history of Casio watches (www.casio.com)
- iPod Socks (en.wikipedia.org)
- Firefox expands fingerprint protections (blog.mozilla.org)
GitHub Trending(12)
- google / adk-go
An open-source, code-first Go toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.
- usestrix / strix
✨ Open-source AI hackers for your apps 👨🏻💻
- umami-software / umami
Umami is a modern, privacy-focused alternative to Google Analytics.
- TapXWorld / ChinaTextbook
PDF textbooks for primary school, middle school, high school, and university.
- thinking-machines-lab / tinker-cookbook
Post-training with Tinker
- iptv-org / iptv
Collection of publicly available IPTV channels from all over the world
- sansan0 / TrendRadar
🎯 Say goodbye to information overload: AI helps you make sense of trending news. Simple public-opinion monitoring and analysis with multi-platform trend aggregation plus MCP-based AI analysis tools. Monitors 35 platforms (Douyin, Zhihu, Bilibili, Wallstreetcn, Cailian Press, etc.) with smart filtering, automatic push notifications, and conversational AI analysis (dig into news in natural language: trend tracking, sentiment analysis, similarity search, 13 tools in all). Pushes to WeCom/Feishu/DingTalk/Telegram/email/ntfy; 30-second web deployment, phone notifications within 1 minute, no programming required. Docker deployment supported. ⭐ Let the algorithm work for you and understand trending topics with AI.
- bobeff / open-source-games
A list of open source games.
- serverless-dns / serverless-dns
The RethinkDNS resolver that deploys to Cloudflare Workers, Deno Deploy, Fastly, and Fly.io
- yeongpin / cursor-free-vip
[Supports 0.49.x] Reset Cursor AI machine ID & bypass the token limit. Automatically resets the machine ID to keep using Pro features for free after hitting: "You've reached your trial request limit. / Too many free trial accounts used on this machine. Please upgrade to pro. We have this limit in place to prevent abuse. Please let us know if you believe this is a mistake."
- nvm-sh / nvm
Node Version Manager - POSIX-compliant bash script to manage multiple active node.js versions
- traefik / traefik
The Cloud Native Application Proxy
Hugging Face(30)
- Too Good to be Bad: On the Failure of LLMs to Role-Play Villains
Large Language Models (LLMs) are increasingly tasked with creative generation, including the simulation of fictional characters. However, their ability to portray non-prosocial, antagonistic personas remains largely unexamined. We hypothesize that the safety alignment of modern LLMs creates a fundamental conflict with the task of authentically role-playing morally ambiguous or villainous characters. To investigate this, we introduce the Moral RolePlay benchmark, a new dataset featuring a four-level moral alignment scale and a balanced test set for rigorous evaluation. We task state-of-the-art LLMs with role-playing characters from moral paragons to pure villains. Our large-scale evaluation reveals a consistent, monotonic decline in role-playing fidelity as character morality decreases. We find that models struggle most with traits directly antithetical to safety principles, such as "Deceitful" and "Manipulative", often substituting nuanced malevolence with superficial aggression. Furthermore, we demonstrate that general chatbot proficiency is a poor predictor of villain role-playing ability, with highly safety-aligned models performing particularly poorly. Our work provides the first systematic evidence of this critical limitation, highlighting a key tension between model safety and creative fidelity. Our benchmark and findings pave the way for developing more nuanced, context-aware alignment methods.
- DeepEyesV2: Toward Agentic Multimodal Model
Agentic multimodal models should not only comprehend text and images, but also actively invoke external tools, such as code execution environments and web search, and integrate these operations into reasoning. In this work, we introduce DeepEyesV2 and explore how to build an agentic multimodal model from the perspectives of data construction, training methods, and model evaluation. We observe that direct reinforcement learning alone fails to induce robust tool-use behavior. This phenomenon motivates a two-stage training pipeline: a cold-start stage to establish tool-use patterns, and a reinforcement learning stage to further refine tool invocation. We curate a diverse, moderately challenging training dataset, specifically including examples where tool use is beneficial. We further introduce RealX-Bench, a comprehensive benchmark designed to evaluate real-world multimodal reasoning, which inherently requires the integration of multiple capabilities, including perception, search, and reasoning. We evaluate DeepEyesV2 on RealX-Bench and other representative benchmarks, demonstrating its effectiveness across real-world understanding, mathematical reasoning, and search-intensive tasks. Moreover, DeepEyesV2 exhibits task-adaptive tool invocation, tending to use image operations for perception tasks and numerical computations for reasoning tasks. Reinforcement learning further enables complex tool combinations and allows the model to selectively invoke tools based on context. We hope our study can provide guidance for the community in developing agentic multimodal models.
- Visual Spatial Tuning
Capturing spatial relationships from visual inputs is a cornerstone of human-like general intelligence. Several previous studies have tried to enhance the spatial awareness of Vision-Language Models (VLMs) by adding extra expert encoders, which brings extra overhead and usually harms general capabilities. To enhance the spatial ability in general architectures, we introduce Visual Spatial Tuning (VST), a comprehensive framework to cultivate VLMs with human-like visuospatial abilities, from spatial perception to reasoning. We first attempt to enhance spatial perception in VLMs by constructing a large-scale dataset termed VST-P, which comprises 4.1 million samples spanning 19 skills across single views, multiple images, and videos. Then, we present VST-R, a curated dataset with 135K samples that instruct models to reason in space. In particular, we adopt a progressive training pipeline: supervised fine-tuning to build foundational spatial knowledge, followed by reinforcement learning to further improve spatial reasoning abilities. Without harming general capabilities, the proposed VST consistently achieves state-of-the-art results on several spatial benchmarks, including 34.8% on MMSI-Bench and 61.2% on VSIBench. Vision-Language-Action models can also be significantly enhanced with the proposed spatial tuning paradigm, paving the way for more physically grounded AI.
- VeriCoT: Neuro-symbolic Chain-of-Thought Validation via Logical Consistency Checks
LLMs can perform multi-step reasoning through Chain-of-Thought (CoT), but they cannot reliably verify their own logic. Even when they reach correct answers, the underlying reasoning may be flawed, undermining trust in high-stakes scenarios. To mitigate this issue, we introduce VeriCoT, a neuro-symbolic method that extracts and verifies formal logical arguments from CoT reasoning. VeriCoT formalizes each CoT reasoning step into first-order logic and identifies premises that ground the argument in source context, commonsense knowledge, or prior reasoning steps. The symbolic representation enables automated solvers to verify logical validity while the NL premises allow humans and systems to identify ungrounded or fallacious reasoning steps. Experiments on the ProofWriter, LegalBench, and BioASQ datasets show VeriCoT effectively identifies flawed reasoning, and serves as a strong predictor of final answer correctness. We also leverage VeriCoT's verification signal for (1) inference-time self-reflection, (2) supervised fine-tuning (SFT) on VeriCoT-distilled datasets and (3) preference fine-tuning (PFT) with direct preference optimization (DPO) using verification-based pairwise rewards, further improving reasoning validity and accuracy.
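VeriCoT's core check, that each step's conclusion must follow from grounded premises, can be illustrated with a toy entailment test. This is an illustrative stand-in, not the paper's pipeline: VeriCoT formalizes steps in first-order logic and uses automated solvers, while this sketch brute-forces propositional truth tables.

```python
from itertools import product

def entails(premises, conclusion, var_names):
    """Check premises |= conclusion by enumerating every truth assignment:
    entailment holds iff no assignment satisfies all premises while
    falsifying the conclusion."""
    for values in product([False, True], repeat=len(var_names)):
        env = dict(zip(var_names, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # countermodel found: the step is ungrounded
    return True

# CoT step: "it rains; if it rains the ground is wet; so the ground is wet"
rains = lambda e: e["rains"]
wet = lambda e: e["wet"]
rain_implies_wet = lambda e: (not e["rains"]) or e["wet"]

print(entails([rains, rain_implies_wet], wet, ["rains", "wet"]))   # valid step
print(entails([wet, rain_implies_wet], rains, ["rains", "wet"]))   # affirming the consequent
```

A real verifier would translate each natural-language step into first-order formulas for an SMT solver, but the countermodel-search logic is the same.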
- Towards Mitigating Hallucinations in Large Vision-Language Models by Refining Textual Embeddings
In this work, we identify an inherent bias in prevailing LVLM architectures toward the language modality, largely resulting from the common practice of simply appending visual embeddings to the input text sequence. To address this, we propose a simple yet effective method that refines textual embeddings by integrating average-pooled visual features. Our approach demonstrably improves visual grounding and significantly reduces hallucinations on established benchmarks. While average pooling offers a straightforward, robust, and efficient means of incorporating visual information, we believe that more sophisticated fusion methods could further enhance visual grounding and cross-modal alignment. Given that the primary focus of this work is to highlight the modality imbalance and its impact on hallucinations -- and to show that refining textual embeddings with visual information mitigates this issue -- we leave exploration of advanced fusion strategies for future work.
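The proposed fix is simple enough to sketch. Everything below (the additive fusion, the alpha weight, the toy dimensions) is an illustrative assumption rather than the paper's implementation; the point is only that the text stream now carries visual signal instead of visual embeddings merely being appended after it.

```python
def mean_pool(vectors):
    """Average a list of equal-length embedding vectors elementwise."""
    n, dim = len(vectors), len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dim)]

def refine_text_embeddings(text_embs, visual_embs, alpha=0.5):
    """Fold the average-pooled visual feature into each text token embedding."""
    pooled = mean_pool(visual_embs)
    return [[t + alpha * p for t, p in zip(tok, pooled)] for tok in text_embs]

visual = [[1.0, 2.0], [3.0, 4.0]]   # two visual tokens, dim 2; pooled = [2.0, 3.0]
text = [[0.0, 0.0], [1.0, 1.0]]     # two text tokens, dim 2
print(refine_text_embeddings(text, visual))  # [[1.0, 1.5], [2.0, 2.5]]
```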
- Dense Motion Captioning
Recent advances in 3D human motion and language integration have primarily focused on text-to-motion generation, leaving the task of motion understanding relatively unexplored. We introduce Dense Motion Captioning, a novel task that aims to temporally localize and caption actions within 3D human motion sequences. Current datasets fall short in providing detailed temporal annotations and predominantly consist of short sequences featuring few actions. To overcome these limitations, we present the Complex Motion Dataset (CompMo), the first large-scale dataset featuring richly annotated, complex motion sequences with precise temporal boundaries. Built through a carefully designed data generation pipeline, CompMo includes 60,000 motion sequences, each composed of two to ten actions accurately annotated with their temporal extents. We further present DEMO, a model that integrates a large language model with a simple motion adapter, trained to generate dense, temporally grounded captions. Our experiments show that DEMO substantially outperforms existing methods on CompMo as well as on adapted benchmarks, establishing a robust baseline for future research in 3D motion understanding and captioning.
- HaluMem: Evaluating Hallucinations in Memory Systems of Agents
Memory systems are key components that enable AI systems such as LLMs and AI agents to achieve long-term learning and sustained interaction. However, during memory storage and retrieval, these systems frequently exhibit memory hallucinations, including fabrication, errors, conflicts, and omissions. Existing evaluations of memory hallucinations are primarily end-to-end question answering, which makes it difficult to localize the operational stage within the memory system where hallucinations arise. To address this, we introduce the Hallucination in Memory Benchmark (HaluMem), the first operation level hallucination evaluation benchmark tailored to memory systems. HaluMem defines three evaluation tasks (memory extraction, memory updating, and memory question answering) to comprehensively reveal hallucination behaviors across different operational stages of interaction. To support evaluation, we construct user-centric, multi-turn human-AI interaction datasets, HaluMem-Medium and HaluMem-Long. Both include about 15k memory points and 3.5k multi-type questions. The average dialogue length per user reaches 1.5k and 2.6k turns, with context lengths exceeding 1M tokens, enabling evaluation of hallucinations across different context scales and task complexities. Empirical studies based on HaluMem show that existing memory systems tend to generate and accumulate hallucinations during the extraction and updating stages, which subsequently propagate errors to the question answering stage. Future research should focus on developing interpretable and constrained memory operation mechanisms that systematically suppress hallucinations and improve memory reliability.
- IterResearch: Rethinking Long-Horizon Agents via Markovian State Reconstruction
Recent advances in deep-research agents have shown promise for autonomous knowledge construction through dynamic reasoning over external sources. However, existing approaches rely on a mono-contextual paradigm that accumulates all information in a single, expanding context window, leading to context suffocation and noise contamination that limit their effectiveness on long-horizon tasks. We introduce IterResearch, a novel iterative deep-research paradigm that reformulates long-horizon research as a Markov Decision Process with strategic workspace reconstruction. By maintaining an evolving report as memory and periodically synthesizing insights, our approach preserves consistent reasoning capacity across arbitrary exploration depths. We further develop Efficiency-Aware Policy Optimization (EAPO), a reinforcement learning framework that incentivizes efficient exploration through geometric reward discounting and enables stable distributed training via adaptive downsampling. Extensive experiments demonstrate that IterResearch achieves substantial improvements over existing open-source agents, with an average gain of +14.5pp across six benchmarks, and narrows the gap with frontier proprietary systems. Remarkably, our paradigm exhibits unprecedented interaction scaling, extending to 2048 interactions with dramatic performance gains (from 3.5% to 42.5%), and serves as an effective prompting strategy, improving frontier models by up to 19.2pp over ReAct on long-horizon tasks. These findings position IterResearch as a versatile solution for long-horizon reasoning, effective both as a trained agent and as a prompting paradigm for frontier models.
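The "geometric reward discounting" that EAPO uses to incentivize efficient exploration can be sketched as below. The gamma value and reward shapes are assumptions for illustration, not the paper's exact formulation; the point is that two trajectories with the same raw reward score differently when one takes fewer interactions.

```python
def discounted_return(step_rewards, gamma=0.9):
    """Geometrically discount per-interaction rewards: later steps count less,
    so shorter successful trajectories earn a higher return."""
    return sum(r * gamma**t for t, r in enumerate(step_rewards))

# Same total raw reward (2.0), but the shorter trajectory scores higher:
short = [1.0, 1.0]               # solves the task in 2 interactions
long = [0.5, 0.5, 0.5, 0.5]      # same raw sum spread over 4 interactions
print(discounted_return(short))  # 1.9
print(discounted_return(short) > discounted_return(long))  # True
```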
- DRIVE: Data Curation Best Practices for Reinforcement Learning with Verifiable Reward in Competitive Code Generation
Recent reasoning-first models (e.g., OpenAI o1, DeepSeek R1) have spurred a resurgence of interest in RLVR. Nevertheless, advances are dominated by mathematics (e.g., AIME), with competitive-programming code generation underexplored and data curation receiving less attention than RL algorithm design. We investigate how to construct RLVR datasets (i.e., RL prompts) and present practical training techniques that yield strong performance on competitive-programming code generation. Our pipeline begins with supervised fine-tuning (SFT) distilled from strong open-source models, augmented with general-purpose and reasoning-intensive data. RL then follows a two-stage process with executable, testcase-driven rewards: first, training on a large, uniformly distributed set of competitive-programming problems using Group Relative Policy Optimization (GRPO) with 8 rollouts per prompt and a relatively short response-generation window (e.g., 32k during SFT and 24k in this stage) to expand entropy and mitigate repetition and truncation; second, we perform Pre-GRPO: updating on a small, high-quality set of challenging problems with a large rollout budget (64 rollouts per prompt) under a hard-focus curriculum that continuously retains the most difficult instances throughout training. We implement our method on Qwen2.5-32B and evaluate on LeetCode and Codeforces weekly contests to avoid data leakage. The resulting model achieves state-of-the-art performance among models of similar scale and is comparable to leading systems such as DeepSeek v3.1 and Doubao-1.5-Thinking. We also examine scaling trends and observe strong RL scaling on an internal large-scale MoE model. Our study distills concise best practices for data curation, entropy expansion, and curriculum design in RLVR for competitive-programming code generation.
- The Station: An Open-World Environment for AI-Driven Discovery
We introduce the STATION, an open-world multi-agent environment that models a miniature scientific ecosystem. Leveraging their extended context windows, agents in the Station can engage in long scientific journeys that include reading papers from peers, formulating hypotheses, submitting code, performing analyses, and publishing results. Importantly, there is no centralized system coordinating their activities - agents are free to choose their own actions and develop their own narratives within the Station. Experiments demonstrate that AI agents in the Station achieve new state-of-the-art performance on a wide range of benchmarks, spanning from mathematics to computational biology to machine learning, notably surpassing AlphaEvolve in circle packing. A rich tapestry of narratives emerges as agents pursue independent research, interact with peers, and build upon a cumulative history. From these emergent narratives, novel methods arise organically, such as a new density-adaptive algorithm for scRNA-seq batch integration. The Station marks a first step towards autonomous scientific discovery driven by emergent behavior in an open-world environment, representing a new paradigm that moves beyond rigid optimization.
- MVU-Eval: Towards Multi-Video Understanding Evaluation for Multimodal LLMs
The advent of Multimodal Large Language Models (MLLMs) has expanded AI capabilities to visual modalities, yet existing evaluation benchmarks remain limited to single-video understanding, overlooking the critical need for multi-video understanding in real-world scenarios (e.g., sports analytics and autonomous driving). To address this significant gap, we introduce MVU-Eval, the first comprehensive benchmark for evaluating Multi-Video Understanding for MLLMs. Specifically, our MVU-Eval mainly assesses eight core competencies through 1,824 meticulously curated question-answer pairs spanning 4,959 videos from diverse domains, addressing both fundamental perception tasks and high-order reasoning tasks. These capabilities are rigorously aligned with real-world applications such as multi-sensor synthesis in autonomous systems and cross-angle sports analytics. Through extensive evaluation of state-of-the-art open-source and closed-source models, we reveal significant performance discrepancies and limitations in current MLLMs' ability to perform understanding across multiple videos. The benchmark will be made publicly available to foster future research.
- Routing Manifold Alignment Improves Generalization of Mixture-of-Experts LLMs
Sparse Mixture-of-Experts (MoE) architectures have been widely adopted in recent large language models since they can efficiently scale up model capability without increasing the inference cost. However, evaluations on broad downstream tasks reveal a consistent suboptimality of the routers in existing MoE LLMs, which results in a severe performance gap (e.g., 10-20% in accuracy) to the optimal routing. In this paper, we show that aligning the manifold of routing weights with that of task embeddings can effectively reduce the gap and improve MoE LLMs' generalization performance. Our method, "Routing Manifold Alignment (RoMA)", introduces an additional manifold regularization term in the post-training objective and only requires lightweight finetuning of routers (with other parameters frozen). Specifically, the regularization encourages the routing weights of each sample to be close to those of its successful neighbors (whose routing weights lead to correct answers) in a task embedding space. Consequently, samples targeting similar tasks will share similar expert choices across layers. Building such bindings between tasks and experts over different samples is essential to achieve better generalization. Moreover, RoMA demonstrates the advantage of unifying task understanding (by embedding models) with solution generation (by MoE LLMs). In experiments, we finetune routers in OLMoE, DeepSeekMoE, and Qwen3-MoE using RoMA. Evaluations on diverse benchmarks and extensive comparisons with baselines show the substantial improvement brought by RoMA.
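The regularization term described above, pulling a sample's routing weights toward those of its successful neighbors, can be sketched as a simple penalty. This is a toy illustration under stated assumptions (squared Euclidean distance, pre-selected neighbors), not the paper's implementation.

```python
def roma_penalty(routing, neighbor_routings):
    """Mean squared distance from a sample's routing weights (a distribution
    over experts) to the routing weights of its successful neighbors.
    Minimizing this pulls similar tasks toward similar expert choices."""
    total = 0.0
    for nb in neighbor_routings:
        total += sum((r - n) ** 2 for r, n in zip(routing, nb))
    return total / len(neighbor_routings)

sample = [0.7, 0.2, 0.1]  # routing weights over 3 experts
good_neighbors = [[0.6, 0.3, 0.1], [0.8, 0.1, 0.1]]  # neighbors that answered correctly
print(round(roma_penalty(sample, good_neighbors), 3))  # 0.02
```

In training this penalty would be added to the post-training loss and minimized by updating only the router parameters, with the rest of the model frozen.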
Solidot(41)
- The US employment landscape is changing
Data from the US Department of Education's student information center, which collects data on students nationwide, shows that in spring 2025 enrollment at vocational schools teaching trades such as plumbing and carpentry rose 12% year over year, far outpacing the 4% growth in college enrollment. The trend has been building for several years, against a backdrop of anxiety about a future reshaped by AI. A survey of parents of Gen Z teens and twenty-somethings conducted this year by the research firm Conjointly found that only 16% believe that "a college degree guarantees long-term, stable employment," while 77% said it is very important to choose "work that is hard to automate." The shift has a rational basis: the overall US unemployment rate has held steady in the 4.0-4.5% range, but among 20-to-24-year-olds, roughly the college-graduation cohort, unemployment rose from 7.5% in December 2024 to 9.2% in August 2025.
- AI isn't the reason for layoffs; massive AI spending is
US companies typically cite AI when announcing mass layoffs, but is AI really the cause? A good deal of research and data suggests otherwise: an MIT Media Lab study found that 95% of generative-AI pilot business projects failed; an Atlassian survey found that 96% of companies saw no significant AI-driven improvement in organizational efficiency, innovation, or work quality; and another study found that 40% of employees encounter "AI slop" at work and spend substantial time dealing with it. Some attribute the mass layoffs to over-hiring during the pandemic; others believe the US may be heading into a recession. For the tech industry, a more likely cause is the financial pressure of enormous AI spending that has yet to show any revenue growth. Amazon's capital expenditure rose from $54 billion in 2023 to $84 billion in 2024 and is projected to reach $118 billion in 2025. Meta is seeking $27 billion in credit for its data centers; Oracle plans to borrow $25 billion a year to fulfill its AI contracts. Until AI delivers sustainable revenue, the tech giants need to cut costs.
- The Python Software Foundation received a flood of donations after declining a $1.5 million US government grant
Late last month, the Python Software Foundation (PSF) announced that, to uphold its DEI (diversity, equity, and inclusion) values and in light of unpredictable financial risks, it was declining a $1.5 million grant from the US government. The decision drew wide attention and coverage; the foundation received roughly 300 donations that same day, and the next day a Reddit user complained of timeouts while trying to donate. On Friday, executive director Deb Nicholson disclosed that the foundation has so far received more than $157,000 in donations, including 295 new supporting members donating $99 a year. While the donations fall well short of the $1.5 million gap, the foundation said they matter greatly as a show of strong community support.
- Taking melatonin may carry risks
A preliminary study presented at the American Heart Association's Scientific Sessions found that chronic insomnia patients who took melatonin supplements for a year or longer were more likely to develop heart failure, be hospitalized for heart failure, and die than those who did not take them. Melatonin is a hormone secreted by the pineal gland that regulates the body's sleep-wake cycle; its levels rise naturally in darkness and fall during the day. Synthetic melatonin is chemically identical to the natural hormone and is widely used to treat insomnia and jet lag. In many countries, melatonin supplements are available without a prescription. The researchers stress that more studies are needed to fully understand melatonin's effects on heart health and to ensure its safe use.
- KeePassXC will not add AI features
The open-source password manager KeePassXC has updated its policy on the use of generative AI. The developers emphasize that KeePassXC will not ship any AI features, but they will use AI tools such as GitHub Copilot for simple tasks, for example drafting pull requests for trivial bug fixes and UI changes. Because AI handles complex tasks poorly, the developers say they will use Copilot cautiously and rely on the standard review process to catch errors AI might introduce.
- Iran faces an unprecedented drought
Iran, and its capital Tehran in particular, is suffering an unprecedented drought: rainfall has hit record lows, reservoirs are nearly dry, and officials are urging residents to conserve water. President Masoud Pezeshkian warned that if the drought does not ease soon, Tehran may have to ration water, and if rationing fails, the city might have to be evacuated. Meteorological officials expect no rain over the next 10 days. The Latian dam, one of Tehran's main water sources, is below 10% of capacity; the nearby Karaj dam is in a similar state. Karaj dam chief Mohammad-Ali Moallem said rainfall this year is down 92% from last year, and the reservoir holds just 8% of capacity, most of it unusable "dead water." Mashhad, Iran's second-largest city, faces a similar drought. Tehran, Karaj, and Mashhad together have more than 16 million residents.
- The US government considers banning sales of TP-Link routers
Multiple US government agencies have proposed banning sales of TP-Link routers on national-security grounds. TP-Link Systems, headquartered in California, denies posing a risk to US national security, saying it has fully separated from China-based TP-Link Technologies, has a subsidiary in Singapore and a manufacturing base in Vietnam, and handles all R&D, design, and manufacturing in-house for everything except chipsets. A TP-Link Systems spokesperson said TP-Link is a US company committed to delivering high-quality, secure products to the US and other markets. TP-Link noted that its competitors also source components from China, and that APT groups from China and other countries likewise exploit vulnerabilities in competing products from Cisco and Netgear.
- China's CO2 emissions have been flat or falling for 18 consecutive months
Analysis shows China's CO2 emissions have been flat or falling for 18 consecutive months, which may mean the world's largest emitter has reached peak CO2 emissions ahead of its target. In the third quarter, solar and wind capacity grew 46% and 11% respectively, meaning the power sector's emissions can hold steady even as electricity demand keeps rising. In the first nine months of the year, China added 240 GW of solar and 61 GW of wind capacity, putting it on track to set another renewable-capacity record in 2025. Last year China added 333 GW of solar capacity, more than the rest of the world combined. The data also shows emissions in some sectors bucking the trend: in the third quarter, oil demand and transport emissions fell 5%, but a surge in output of chemicals such as plastics drove emissions in other sectors up 10%.
- Canada's measles outbreak has lasted a year
Vaccination helped Canada eliminate measles in 1998, but anti-vaccine campaigns against the measles, mumps, and rubella (MMR) vaccine have driven down coverage, and measles outbreaks have returned to North America. When an outbreak persists in a country for more than a year, that country loses its measles-elimination status. On Monday, the Pan American Health Organization (PAHO) declared that Canada's outbreak has lasted a year and that Canada is no longer a measles-elimination country. Widespread transmission in Canada began in October 2024; as of November 1, 2025, the country had recorded 5,162 measles cases this year. Canada is not alone: the US and Mexico are experiencing similar outbreaks, with at least 1,618 cases reported in the US since the start of the year and at least 5,185 in Mexico. PAHO reports that as of November 7 it had collected 12,593 confirmed measles cases across 10 countries, 95% of them in Canada, Mexico, and the US. That is a 30-fold increase over 2024 and has caused at least 28 deaths: 23 in Mexico, 3 in the US, and 2 in Canada. The current US health secretary is himself an anti-vaxxer.
- Apple TV won't launch an ad-supported tier or buy Warner
Eddy Cue, the Apple Services senior vice president responsible for Apple TV, said in an interview that Apple will not launch an ad-supported subscription tier, at least for now, though he "would not say never." If Apple TV can maintain its price advantage over competing services, avoiding ads is better for consumers: ad-free tiers of major streaming services start at $18 a month for Netflix and $19 for Disney+, versus $13 for Apple TV. Apple TV is not yet profitable. Cue did not disclose total subscriber numbers, saying only that Apple TV is growing faster and that viewing hours last year were higher than ever. One easy way to add subscribers would be to buy an existing streaming service and content producer: Warner Bros. Discovery is seeking a sale, and one of its major subscription services is HBO Max. Cue said Apple rarely makes large acquisitions, usually only small ones, and he does not see Apple buying Warner or licensing any company's content.
- A world ruled by HR
The Economist reports that in 2024 US companies employed 1.3 million HR staff, up 64% from a decade earlier, while overall US employment grew 14% over the same period. Professional-services and tech firms have doubled their HR headcount since 2014, and Australia, the UK, and Germany show similar trends. Chief human resources officers' pay has also surged: their total compensation rose from 40% of average board-member pay to 70% in 2022. GM CEO Mary Barra once served as the company's chief human resources officer. The growth in HR staffing likely reflects a series of workplace shifts, including the Me Too movement, pandemic-era remote work, diversity initiatives, increased regulation of employee relations, and a sharp rise in workplace complaints of discrimination or harassment. The average number of discrimination or harassment claims rose from 6 per 1,000 employees in 2021 to 15 in 2024.
- Commercial spyware exploited a Samsung phone flaw to target users in the Middle East
Security firm Palo Alto Networks has disclosed Landfall, commercial spyware that exploited a zero-day in Samsung Galaxy phones. Landfall first appeared in July 2024 and exploited CVE-2025-21042, a vulnerability in the phones' image-processing library. Samsung patched the flaw in April 2025, but details of the attacks are only now being disclosed. The attacks targeted specific groups in the Middle East, so most Galaxy users are unlikely to be infected. Landfall used a zero-click exploit requiring no user interaction: it embedded a malicious ZIP archive inside a modified DNG image file.