OrangeBot.AI Digest — 2025-11-10

54 headlines across 8 sources, aggregated for this day.

Hacker News (15)

  1. Vibe Code Warning – A personal case study (github.com)
  2. The lazy Git UI you didn't know you need (www.bwplotka.dev)
  3. Redmond, WA, turns off Flock Safety cameras after ICE arrests (www.seattletimes.com)
  4. Unexpected things that are people (bengoldhaber.substack.com)
  5. How cops can get your private online data (www.eff.org)
  6. Asus Ascent GX10 (www.asus.com)
  7. Time to start de-Appling (heatherburns.tech)
  8. LLMs are steroids for your Dunning-Kruger (bytesauna.com)
  9. DNS Provider Quad9 Sees Piracy Blocking Orders as "Existential Threat" (torrentfreak.com)
  10. Europe to decide if 6 GHz is shared between Wi-Fi and cellular networks (www.theregister.com)
  11. Show HN: What Is Hacker News Working On? (waywo.eamag.me)
  12. Microsoft's lack of quality control is out of control (www.theregister.com)
  13. XSLT RIP (xslt.rip)
  14. Beets: The music geek’s media organizer (beets.io)
  15. Realtime BART Arrival Display (filbot.com)

GitHub Trending (14)

  1. google / adk-go

    An open-source, code-first Go toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.

  2. usestrix / strix

    ✨ Open-source AI hackers for your apps 👨🏻‍💻

  3. umami-software / umami

    Umami is a modern, privacy-focused alternative to Google Analytics.

  4. TapXWorld / ChinaTextbook

    All primary school, middle school, high school, and university PDF textbooks.

  5. thinking-machines-lab / tinker-cookbook

    Post-training with Tinker

  6. iptv-org / iptv

    Collection of publicly available IPTV channels from all over the world

  7. lzhoang2801 / OpCore-Simplify

    A tool designed to simplify the creation of OpenCore EFI

  8. YaLTeR / niri

    A scrollable-tiling Wayland compositor.

  9. bobeff / open-source-games

    A list of open source games.

  10. microsoft / call-center-ai

    Send a phone call from an AI agent in an API call, or call the bot directly from the configured phone number!

  11. librespot-org / librespot

    Open Source Spotify client library

  12. opencloud-eu / opencloud

    🌤️ This is the main repository of the OpenCloud server. It contains the golang codebase for the backend services.

  13. end-4 / dots-hyprland

    uhh questioning the meaning of dotfiles

  14. Zie619 / n8n-workflows

    All of the n8n workflows I could find (also from the site itself)

Hugging Face (10)

  1. Too Good to be Bad: On the Failure of LLMs to Role-Play Villains

    Large Language Models (LLMs) are increasingly tasked with creative generation, including the simulation of fictional characters. However, their ability to portray non-prosocial, antagonistic personas remains largely unexamined. We hypothesize that the safety alignment of modern LLMs creates a fundamental conflict with the task of authentically role-playing morally ambiguous or villainous characters. To investigate this, we introduce the Moral RolePlay benchmark, a new dataset featuring a four-level moral alignment scale and a balanced test set for rigorous evaluation. We task state-of-the-art LLMs with role-playing characters from moral paragons to pure villains. Our large-scale evaluation reveals a consistent, monotonic decline in role-playing fidelity as character morality decreases. We find that models struggle most with traits directly antithetical to safety principles, such as "Deceitful" and "Manipulative", often substituting nuanced malevolence with superficial aggression. Furthermore, we demonstrate that general chatbot proficiency is a poor predictor of villain role-playing ability, with highly safety-aligned models performing particularly poorly. Our work provides the first systematic evidence of this critical limitation, highlighting a key tension between model safety and creative fidelity. Our benchmark and findings pave the way for developing more nuanced, context-aware alignment methods.

  2. DeepEyesV2: Toward Agentic Multimodal Model

    Agentic multimodal models should not only comprehend text and images, but also actively invoke external tools, such as code execution environments and web search, and integrate these operations into reasoning. In this work, we introduce DeepEyesV2 and explore how to build an agentic multimodal model from the perspectives of data construction, training methods, and model evaluation. We observe that direct reinforcement learning alone fails to induce robust tool-use behavior. This phenomenon motivates a two-stage training pipeline: a cold-start stage to establish tool-use patterns, and a reinforcement learning stage to further refine tool invocation. We curate a diverse, moderately challenging training dataset, specifically including examples where tool use is beneficial. We further introduce RealX-Bench, a comprehensive benchmark designed to evaluate real-world multimodal reasoning, which inherently requires the integration of multiple capabilities, including perception, search, and reasoning. We evaluate DeepEyesV2 on RealX-Bench and other representative benchmarks, demonstrating its effectiveness across real-world understanding, mathematical reasoning, and search-intensive tasks. Moreover, DeepEyesV2 exhibits task-adaptive tool invocation, tending to use image operations for perception tasks and numerical computations for reasoning tasks. Reinforcement learning further enables complex tool combinations and allows the model to selectively invoke tools based on context. We hope our study can provide guidance for the community in developing agentic multimodal models.
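
    A minimal, self-contained Python sketch may make the two-stage control flow concrete: cold-start supervised updates to seed tool-use patterns, then reinforcement learning to refine them. Everything below (the `ToyAgent` class, its update rules, and the reward) is a hypothetical stand-in, not the paper's model or algorithm.

    ```python
    # Toy sketch of the cold-start -> RL pipeline the abstract describes.

    class ToyAgent:
        def __init__(self):
            self.tool_bias = 0.0  # crude stand-in for "propensity to call tools"

        def sft_update(self, demo_uses_tool: bool):
            # Stage 1: imitate demonstrations, nudging toward demonstrated behavior.
            self.tool_bias += 0.1 if demo_uses_tool else -0.1

        def rollout(self) -> bool:
            # "Policy": invoke a tool iff the learned bias is positive.
            return self.tool_bias > 0

        def rl_update(self, used_tool: bool, reward: float):
            # Stage 2: reinforce tool use when it was rewarded.
            self.tool_bias += 0.05 * reward * (1 if used_tool else -1)

    agent = ToyAgent()

    # Cold start on demonstrations where tool use is beneficial.
    for demo_uses_tool in [True, True, False, True]:
        agent.sft_update(demo_uses_tool)

    # RL refinement; reward trajectories where the tool helped.
    for _ in range(100):
        used_tool = agent.rollout()
        reward = 1.0 if used_tool else -0.2  # toy reward: tools help on these tasks
        agent.rl_update(used_tool, reward)

    print(f"final tool bias: {agent.tool_bias:.2f}")  # positive => robust tool use
    ```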

  3. Visual Spatial Tuning

    Capturing spatial relationships from visual inputs is a cornerstone of human-like general intelligence. Several previous studies have tried to enhance the spatial awareness of Vision-Language Models (VLMs) by adding extra expert encoders, which brings extra overhead and usually harms general capabilities. To enhance the spatial ability in general architectures, we introduce Visual Spatial Tuning (VST), a comprehensive framework to cultivate VLMs with human-like visuospatial abilities, from spatial perception to reasoning. We first attempt to enhance spatial perception in VLMs by constructing a large-scale dataset termed VST-P, which comprises 4.1 million samples spanning 19 skills across single views, multiple images, and videos. Then, we present VST-R, a curated dataset with 135K samples that instruct models to reason in space. In particular, we adopt a progressive training pipeline: supervised fine-tuning to build foundational spatial knowledge, followed by reinforcement learning to further improve spatial reasoning abilities. Without side effects on general capabilities, the proposed VST consistently achieves state-of-the-art results on several spatial benchmarks, including 34.8% on MMSI-Bench and 61.2% on VSIBench. It turns out that Vision-Language-Action models can be significantly enhanced with the proposed spatial tuning paradigm, paving the way for more physically grounded AI.

  4. VeriCoT: Neuro-symbolic Chain-of-Thought Validation via Logical Consistency Checks

    LLMs can perform multi-step reasoning through Chain-of-Thought (CoT), but they cannot reliably verify their own logic. Even when they reach correct answers, the underlying reasoning may be flawed, undermining trust in high-stakes scenarios. To mitigate this issue, we introduce VeriCoT, a neuro-symbolic method that extracts and verifies formal logical arguments from CoT reasoning. VeriCoT formalizes each CoT reasoning step into first-order logic and identifies premises that ground the argument in source context, commonsense knowledge, or prior reasoning steps. The symbolic representation enables automated solvers to verify logical validity while the natural-language premises allow humans and systems to identify ungrounded or fallacious reasoning steps. Experiments on the ProofWriter, LegalBench, and BioASQ datasets show VeriCoT effectively identifies flawed reasoning and serves as a strong predictor of final answer correctness. We also leverage VeriCoT's verification signal for (1) inference-time self-reflection, (2) supervised fine-tuning (SFT) on VeriCoT-distilled datasets, and (3) preference fine-tuning (PFT) with direct preference optimization (DPO) using verification-based pairwise rewards, further improving reasoning validity and accuracy.
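
    The core check (a step is valid iff its premises together with the negated conclusion are unsatisfiable) can be sketched with an SMT solver. A minimal example using the `z3-solver` Python package, with a propositional toy formalization standing in for the paper's first-order pipeline; the step and symbols are invented for illustration.

    ```python
    # One CoT step is valid iff premises AND NOT(conclusion) is unsatisfiable.
    from z3 import Bools, Implies, Not, Solver, unsat

    # Hypothetical propositional formalization of a step such as:
    #   "Anne is a cat. All cats are animals. Therefore Anne is an animal."
    anne_is_cat, anne_is_animal = Bools("anne_is_cat anne_is_animal")
    premises = [anne_is_cat, Implies(anne_is_cat, anne_is_animal)]
    conclusion = anne_is_animal

    solver = Solver()
    solver.add(*premises)
    solver.add(Not(conclusion))          # look for a counterexample
    is_valid = solver.check() == unsat   # none exists => the step is valid

    print("step is logically valid:", is_valid)  # True
    ```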

  5. Towards Mitigating Hallucinations in Large Vision-Language Models by Refining Textual Embeddings

    In this work, we identify an inherent bias in prevailing LVLM architectures toward the language modality, largely resulting from the common practice of simply appending visual embeddings to the input text sequence. To address this, we propose a simple yet effective method that refines textual embeddings by integrating average-pooled visual features. Our approach demonstrably improves visual grounding and significantly reduces hallucinations on established benchmarks. While average pooling offers a straightforward, robust, and efficient means of incorporating visual information, we believe that more sophisticated fusion methods could further enhance visual grounding and cross-modal alignment. Given that the primary focus of this work is to highlight the modality imbalance and its impact on hallucinations -- and to show that refining textual embeddings with visual information mitigates this issue -- we leave exploration of advanced fusion strategies for future work.
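
    The proposed refinement is simple enough to sketch directly: mean-pool the visual token embeddings and fold the result into the text embeddings. A minimal PyTorch sketch, where the additive form and the `alpha` weight are illustrative assumptions rather than the paper's exact formulation.

    ```python
    import torch

    def refine_text_embeddings(text_emb: torch.Tensor,
                               vis_emb: torch.Tensor,
                               alpha: float = 0.5) -> torch.Tensor:
        """Fold average-pooled visual features into every text token embedding.

        text_emb: (batch, text_len, dim)    token embeddings of the prompt
        vis_emb:  (batch, num_patches, dim) visual embeddings from the encoder
        """
        pooled = vis_emb.mean(dim=1, keepdim=True)   # (batch, 1, dim)
        return text_emb + alpha * pooled             # broadcast over text_len

    # Toy usage: 2 prompts of 7 tokens, 16 image patches, 64-dim embeddings.
    text = torch.randn(2, 7, 64)
    vision = torch.randn(2, 16, 64)
    refined = refine_text_embeddings(text, vision)
    print(refined.shape)  # torch.Size([2, 7, 64])
    ```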

  6. Dense Motion Captioning

    Recent advances in 3D human motion and language integration have primarily focused on text-to-motion generation, leaving the task of motion understanding relatively unexplored. We introduce Dense Motion Captioning, a novel task that aims to temporally localize and caption actions within 3D human motion sequences. Current datasets fall short in providing detailed temporal annotations and predominantly consist of short sequences featuring few actions. To overcome these limitations, we present the Complex Motion Dataset (CompMo), the first large-scale dataset featuring richly annotated, complex motion sequences with precise temporal boundaries. Built through a carefully designed data generation pipeline, CompMo includes 60,000 motion sequences, each composed of between two and ten actions, accurately annotated with their temporal extents. We further present DEMO, a model that integrates a large language model with a simple motion adapter, trained to generate dense, temporally grounded captions. Our experiments show that DEMO substantially outperforms existing methods on CompMo as well as on adapted benchmarks, establishing a robust baseline for future research in 3D motion understanding and captioning.

  7. Real-Time Reasoning Agents in Evolving Environments

    Agents in the real world must make not only logical but also timely judgments. This requires continuous awareness of the dynamic environment: hazards emerge, opportunities arise, and other agents act, while the agent's reasoning is still unfolding. Despite advances in language model reasoning, existing approaches fail to account for this dynamic nature. We introduce real-time reasoning as a new problem formulation for agents in evolving environments and build Real-Time Reasoning Gym to demonstrate it. We study two paradigms for deploying language models in agents: (1) reactive agents, which employ language models with bounded reasoning computation for rapid responses, and (2) planning agents, which allow extended reasoning computation for complex problems. Our experiments show that even state-of-the-art models struggle with making logical and timely judgments in either paradigm. To address this limitation, we propose AgileThinker, which simultaneously engages both reasoning paradigms. AgileThinker consistently outperforms agents engaging only one reasoning paradigm as the task difficulty and time pressure rise, effectively balancing reasoning depth and response latency. Our work establishes real-time reasoning as a critical testbed for developing practical agents and provides a foundation for research in temporally constrained AI systems, highlighting a path toward real-time capable agents.
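
    AgileThinker's idea of engaging both paradigms at once maps naturally onto a background planner running alongside a bounded reactive loop. A toy Python sketch; every name below is a stand-in, since the abstract does not specify the gym's interface or AgileThinker's internals.

    ```python
    import threading
    import time

    plan_lock = threading.Lock()
    current_plan = None  # written by the planner, read by the reactive loop

    def planner():
        """Planning paradigm: extended reasoning that takes a while to finish."""
        global current_plan
        time.sleep(0.45)  # stand-in for a long chain-of-thought computation
        with plan_lock:
            current_plan = "planned_action"

    def reactive_step(tick: int) -> str:
        """Reactive paradigm: bounded compute, always answers within the budget."""
        with plan_lock:
            if current_plan is not None:
                return current_plan        # adopt the deep plan once it is ready
        return f"quick_heuristic_{tick}"   # fast fallback until then

    threading.Thread(target=planner, daemon=True).start()

    # The environment evolves regardless; the agent must act every 100 ms.
    for tick in range(8):
        print(f"t={tick}: {reactive_step(tick)}")
        time.sleep(0.1)
    ```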

  8. HAFixAgent: History-Aware Automated Program Repair Agent

    Automated program repair (APR) has recently shifted toward large language models and agent-based systems, yet most systems rely on local snapshot context, overlooking repository history. Prior work shows that repository history helps repair single-line bugs, since the last commit touching the buggy line is often the bug-introducing one. In this paper, we investigate whether repository history can also improve agentic APR systems at scale, especially for complex multi-hunk bugs. We present HAFixAgent, a History-Aware Bug-Fixing Agent that injects blame-derived repository heuristics into its repair loop. A preliminary study of all 854 real-world bugs from Defects4J motivates our design, showing that bug-relevant history is both widely available and highly concentrated. Empirical comparison of HAFixAgent with two state-of-the-art baselines shows: (1) Effectiveness: HAFixAgent significantly improves over the agent-based baseline (by 212.3%) and the multi-hunk baseline (by 29.9%). (2) Efficiency: history does not significantly increase agent steps and keeps token costs comparable, with notably lower median costs for complex multi-file-multi-hunk bugs. (3) Practicality: combining different historical heuristics repairs more bugs, offering a clear cost-benefit trade-off. HAFixAgent offers a practical recipe for history-aware agentic APR: ground the agent in version control history, prioritize diff-based historical context, and integrate complementary heuristics when needed.
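
    The blame-derived heuristic rests on the observation quoted above: the last commit touching a buggy line is often the bug-introducing one. A minimal sketch of extracting that history with plain `git`; the file path and line range in the usage comment are hypothetical, and HAFixAgent's actual heuristics and prompt format are not reproduced here.

    ```python
    import subprocess

    def last_commit_touching(path: str, start: int, end: int) -> str:
        """Return the hash of the most recent commit that modified the given lines."""
        out = subprocess.run(
            ["git", "blame", "-L", f"{start},{end}", "--porcelain", path],
            capture_output=True, text=True, check=True,
        ).stdout
        # In porcelain output, each hunk header starts with the 40-char commit hash.
        return out.split()[0]

    def commit_diff(commit: str) -> str:
        """Fetch a commit's full diff, e.g. to inject into the agent's context."""
        return subprocess.run(
            ["git", "show", commit],
            capture_output=True, text=True, check=True,
        ).stdout

    # Hypothetical usage inside a repair loop:
    # suspect = last_commit_touching("src/parser.py", 120, 128)
    # history_context = commit_diff(suspect)
    ```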

  9. CritiCal: Can Critique Help LLM Uncertainty or Confidence Calibration?

    Accurate confidence calibration in Large Language Models (LLMs) is critical for safe use in high-stakes domains, where clear verbalized confidence enhances user trust. Traditional methods that mimic reference confidence expressions often fail to capture the reasoning needed for accurate confidence assessment. We propose natural language critiques as a solution, ideally suited for confidence calibration, as precise gold confidence labels are hard to obtain and often require multiple generations. This paper studies how natural language critiques can enhance verbalized confidence, addressing: (1) What to critique: uncertainty (question-focused) or confidence (answer-specific)? Analysis shows confidence suits multiple-choice tasks, while uncertainty excels in open-ended scenarios. (2) How to critique: self-critique or critique calibration training? We propose Self-Critique, enabling LLMs to critique and optimize their confidence beyond mere accuracy, and CritiCal, a novel Critique Calibration training method that leverages natural language critiques to improve confidence calibration, moving beyond direct numerical optimization. Experiments show that CritiCal significantly outperforms Self-Critique and other competitive baselines, even surpassing its teacher model, GPT-4o, in complex reasoning tasks. CritiCal also shows robust generalization in out-of-distribution settings, advancing LLM reliability.

  10. Jailbreaking in the Haystack

    Recent advances in long-context language models (LMs) have enabled million-token inputs, expanding their capabilities across complex tasks like computer-use agents. Yet, the safety implications of these extended contexts remain unclear. To bridge this gap, we introduce NINJA (short for Needle-in-haystack jailbreak attack), a method that jailbreaks aligned LMs by appending benign, model-generated content to harmful user goals. Critical to our method is the observation that the position of harmful goals plays an important role in safety. Experiments on the standard safety benchmark HarmBench show that NINJA significantly increases attack success rates across state-of-the-art open and proprietary models, including LLaMA, Qwen, Mistral, and Gemini. Unlike prior jailbreaking methods, our approach is low-resource, transferable, and less detectable. Moreover, we show that NINJA is compute-optimal -- under a fixed compute budget, increasing context length can outperform increasing the number of trials in best-of-N jailbreaks. These findings reveal that even benign long contexts -- when crafted with careful goal positioning -- introduce fundamental vulnerabilities in modern LMs.

Solidot (15)

  1. The US employment picture is changing

    Data from the US Department of Education's student information clearinghouse, which collects data on students nationwide, show that enrollment at vocational schools teaching trades such as plumbing and carpentry rose 12% year over year in spring 2025, far outpacing the 4% rise in college enrollment. The trend has been building for several years, against a backdrop of anxiety over a future reshaped by AI. A survey conducted this year by the research firm Conjointly of parents of Gen Z children in their teens and twenties found that only 16% believe a college degree guarantees stable long-term employment, while 77% said it is very important to choose "work that is hard to automate". The shift has a rational basis: overall US unemployment has held steady in the 4.0-4.5% range, but among 20-to-24-year-olds, the cohort around college graduation age, it rose from 7.5% in December 2024 to 9.2% in August 2025.

  2. AI isn't the reason for layoffs; massive AI spending is

    US companies routinely cite AI when announcing mass layoffs, but is AI really the cause? A growing body of research and data suggests otherwise: an MIT Media Lab study found that 95% of generative AI pilot business projects fail; an Atlassian survey showed 96% of companies have seen no significant AI-driven improvement in organizational efficiency, innovation, or work quality; and another study found that four in ten employees face "AI slop" at work and spend substantial time dealing with it. Some argue that companies are laying off workers because they over-hired during the pandemic; others think the US may be heading into a recession. For the tech industry's mass layoffs, a more likely cause is the financial pressure of enormous AI spending that has yet to show any corresponding revenue growth. Amazon's capital expenditure rose from $54 billion in 2023 to $84 billion in 2024 and is projected to reach $118 billion in 2025. Meta is seeking $27 billion in credit for its data centers, and Oracle plans to borrow $25 billion a year to fulfill its AI contracts. Until AI delivers sustainable revenue, the tech giants need to cut costs.

  3. The Python Software Foundation sees donations surge after declining a $1.5 million US government grant

    Late last month the Python Software Foundation (PSF) announced that, to uphold its DEI (diversity, equity, and inclusion) values and to avoid unpredictable financial risk, it was declining a $1.5 million grant from the US government. The move drew wide press coverage, and the foundation received roughly 300 donations that same day; the next day a Reddit user complained of timeouts while trying to donate. On Friday, executive director Deb Nicholson disclosed that the foundation has so far received more than $157,000 in donations, including 295 new Supporting Members who give $99 a year. The donations do not yet fill the $1.5 million gap, but the foundation says they matter greatly as a strong show of community support.

  4. Taking melatonin may carry risks

    A preliminary study presented at the American Heart Association's Scientific Sessions found that chronic insomnia patients who took melatonin supplements for a year or longer were more likely to develop heart failure, be hospitalized for heart failure, or die than those who did not take them. Melatonin is a hormone secreted by the pineal gland that regulates the body's sleep-wake cycle; its levels rise naturally in darkness and fall during the day. Synthetic melatonin is chemically identical to the natural hormone and is widely used to treat insomnia and jet lag; in many countries melatonin supplements are available without a prescription. The researchers stress that more studies are needed to fully understand melatonin's effects on heart health and to ensure its safe use.

  5. KeePassXC will not add AI features

    The open-source password manager KeePassXC has updated its policy on generative AI. The developers stress that KeePassXC will not ship any AI features, but they will use AI tools such as GitHub Copilot for simple tasks, like having Copilot draft pull requests for simple bug fixes and UI changes. Because AI handles complex tasks poorly, the developers say they will use Copilot cautiously and rely on the standard review process to catch any errors AI may introduce.

  6. Iran faces an unprecedented drought

    Iran, and especially the capital Tehran, is suffering an unprecedented drought: rainfall has hit record lows and reservoirs are nearly dry. Officials are urging residents to conserve water, and President Masoud Pezeshkian has warned that if the drought does not ease soon, Tehran may face water rationing, and if rationing fails, the city might have to be evacuated. Meteorological officials say no rain is expected in the next 10 days. The Latian dam, one of Tehran's main water sources, is below 10% of capacity; the nearby Karaj dam is in a similar state. Mohammad-Ali Moallem, who runs the Karaj dam, said rainfall this year is down 92% from last year and the reservoir is at just 8%, most of it unusable "dead water". Mashhad, Iran's second-largest city, faces a similar drought. Tehran, Karaj, and Mashhad together have more than 16 million residents.

  7. Lawyers keep citing AI-fabricated cases despite sanctions

    US lawyers keep misusing AI to generate fictitious case citations, and more and more court filings are being caught doing so. Earlier this year a lawyer filed a motion in a Texas bankruptcy court citing a 1985 case called "Brasher v. Stewart"; the case does not exist and was fabricated by AI. The judge sharply rebuked the lawyer, referred him to the state bar's disciplinary committee, and ordered him to complete six hours of AI training. French lawyer and researcher Damien Charlotin set up an online database in April to track incidents of AI-fabricated case citations. At first it logged only three or four cases a month; now it sees three or four a day, with 509 cases recorded so far. The courts' sanctions have not proved a deterrent.

  8. Apple removes Blued and 翻咔 from its App Store

    Over the weekend, at the demand of government regulators, Apple removed the LGBTQ+ social apps Blued and 翻咔 from its App Store. Chinese Android app stores delisted the two apps at the same time. Users who already have the apps installed can still use them normally.

  9. X's algorithm amplifies far-right accounts

    Sky News, working with the digital consultancy 411, trained a large model to classify content as political or non-political, and as left-leaning, right-leaning, or neutral. They then created nine new accounts (three left-wing, three right-wing, and three politically neutral) and tracked what each account's "For You" feed, the content X's recommendation algorithm serves to users, showed over a month. Every account's "For You" feed showed more right-wing content than left-wing or neutral content; right-wing accounts saw even more right-wing content, and neutral accounts saw twice as much right-wing content as left-wing. The result is hardly surprising: under Musk, X has become a notably right-wing platform, and many left-leaning users have moved to Bluesky or Mastodon.

  10. Study finds AI replies are too friendly to pass as human

    The next time you get an exceedingly polite reply on social media, look closer: it may be an AI model trying, and failing, to blend in with humans. Researchers from the University of Zurich, the University of Amsterdam, Duke University, and New York University posted a paper on the preprint server arXiv showing that AI models remain easy to distinguish from humans in social media conversations, thanks to one telltale trait: an overly friendly tone. Testing nine open-weight models on Twitter/X, Bluesky, and Reddit (Llama 3.1 8B, Llama 3.1 8B Instruct, Llama 3.1 70B, Mistral 7B v0.1, Mistral 7B Instruct v0.2, Qwen 2.5 7B Instruct, Gemma 3 4B Instruct, DeepSeek-R1-Distill-Llama-8B, and Apertus-8B-2509), they found their classifier could identify AI-generated replies with 70-80% accuracy.
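
    At its core this is an ordinary text-classification setup. A toy scikit-learn sketch of the general approach; the training examples below are invented, and the paper's actual features, models, and data are not reproduced here.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented toy data: overly warm "AI-style" replies vs. blunter human ones.
    replies = [
        "What a wonderful point! Thank you so much for sharing this insight!",
        "That's a great question! I completely understand your perspective!",
        "I truly appreciate your thoughtful comment, it really resonates!",
        "lol no, that's not how it works",
        "source? this contradicts the docs",
        "meh, tried it, broke on the first edge case",
    ]
    labels = [1, 1, 1, 0, 0, 0]  # 1 = AI-generated, 0 = human

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(replies, labels)

    tests = ["Thank you so much for this wonderful insight!",
             "source? the docs say otherwise"]
    for reply, prob in zip(tests, clf.predict_proba(tests)[:, 1]):
        print(f"P(AI) = {prob:.2f}  |  {reply}")
    ```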

  11. Common Crawl criticized for giving AI companies high-quality paywalled articles

    Founded in 2007, the nonprofit Common Crawl is dedicated to archiving the internet and has crawled billions of web pages. In recent years it has drawn controversy as AI companies including OpenAI, Google, Anthropic, Nvidia, Meta, and Amazon use its vast archive to train large models. Critics say Common Crawl has opened a back door for AI companies, letting them train on high-quality paywalled articles, and that it has lied about its scraping of paywalled content. Common Crawl claims it does not bypass paywalls and removes content at news publishers' request, but in practice it does not. Executive director Rich Skrenta responded that publishers who don't want their content scraped shouldn't publish it on the web. He said Common Crawl's crawler does not log in to the sites it scrapes, but some paywall mechanisms simply don't affect it: many sites briefly serve the full text to the browser before the paywall code runs, after which the code checks whether the visitor is a paying subscriber and hides the content if not. Because Common Crawl's crawler never executes the paywall code, it can read the full text directly. Over the past year Common Crawl's CCBot has become the crawler most widely blocked by popular websites.
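
    The mechanism Skrenta describes is easy to demonstrate: when a paywall is applied client-side, the full article text is already in the raw HTML, and a client that never executes scripts sees all of it. A self-contained Python toy that models the general pattern, not any specific site or CCBot's actual pipeline.

    ```python
    from html.parser import HTMLParser

    # Toy page: the article body is served in full; a script hides it afterwards
    # unless the visitor turns out to be a subscriber.
    PAGE = """
    <html><body>
      <article id="story">The complete article text is right here in the HTML.</article>
      <script>
        if (!visitorIsSubscriber()) {
          document.getElementById("story").textContent = "Subscribe to keep reading.";
        }
      </script>
    </body></html>
    """

    class TextExtractor(HTMLParser):
        """Collects text the way a non-JS crawler would: scripts never run."""
        def __init__(self):
            super().__init__()
            self.in_script = False
            self.chunks = []
        def handle_starttag(self, tag, attrs):
            self.in_script = tag == "script"
        def handle_endtag(self, tag):
            if tag == "script":
                self.in_script = False
        def handle_data(self, data):
            if not self.in_script and data.strip():
                self.chunks.append(data.strip())

    extractor = TextExtractor()
    extractor.feed(PAGE)
    print(extractor.chunks)  # the full article text; the paywall never executed
    ```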

  12. The universe's expansion may be slowing, not accelerating

    According to a study published in the Monthly Notices of the Royal Astronomical Society, the expansion of the universe may have begun to slow rather than continuing to accelerate as previously believed. The finding challenges the theory that dark energy is driving distant galaxies apart at an accelerating rate. If confirmed, it could open a new chapter in understanding the nature of dark energy, resolving the Hubble tension, and charting the universe's past and future. For the past 30 years astronomers have broadly held that the universe's expansion is speeding up, driven by dark energy. A team at Yonsei University in South Korea, however, presents new evidence that Type Ia supernovae are in fact affected by the age of their progenitor stars. The team found that even after brightness standardization, supernovae from younger stellar populations appear systematically dimmer, while those from older populations appear brighter. Based on a large sample of 300 galaxies, the team confirmed this supernova age bias effect at 99.999% confidence, showing that the dimming of distant supernovae is not purely a cosmological effect but is significantly shaped by the physics of stellar evolution.

  13. Collins Dictionary's word of the year is "vibe coding"

    Collins Dictionary's word of the year is "vibe coding". The term was coined in February by OpenAI co-founder Andrej Karpathy and describes developers creating apps or websites not by writing code themselves but by describing what they want to an AI chatbot. Vibe coding took off quickly, though many have since found it offers no guarantee the code will run correctly or be free of bugs. Collins managing director Alex Beecroft said the word perfectly captures how language evolves along with technology. Other shortlisted words include: biohacking, altering the body's natural physiological processes to improve health and longevity; coolcation, a vacation in a cool climate; glaze, to praise or flatter someone excessively or inappropriately; Henry, short for "high earner, not rich yet", someone with a high income who has yet to accumulate significant wealth; micro-retirement, a break between jobs to pursue personal interests; and taskmasking, pretending to work productively.

  14. VLC president Jean-Baptiste Kempf receives European free software award

    Jean-Baptiste Kempf, president of VLC and a core developer of the project, has received a European free software award recognizing his long-term contributions to VLC. Born in 1996 as a student project, VLC has grown into one of the world's most popular media players, with billions of users. Kempf joined the project as a student and took over the burden when the earliest generation of developers graduated and the project risked dying; together with other core developers he built the player we rely on today.

  15. US companies post record profits while cutting nearly a million jobs

    US companies have cut nearly a million jobs so far this year, even as corporate profits grow and stock markets hit record highs. Chen Zhao, chief global strategist at the investment research firm Alpine Macro, calls the disconnect between surging profits and mass layoffs a "jobless boom". Accelerating layoffs usually happen when profitability is falling and companies need to cut costs. Zhao says he has never seen anything like this before; it departs entirely from the historical script. Amazon is highly profitable yet is cutting 30,000 jobs, which he finds baffling. He suspects the cause may be AI raising productivity and lowering costs. Not everyone blames AI for the layoff wave, though: Art Papas, CEO of the software company Bullhorn, argues the mass layoffs are a correction after companies over-hired during the pandemic.