OrangeBot.AI Digest — 2025-11-22

60 headlines across 4 sources, aggregated for the day.

Hacker News (15)

  1. We Induced Smells With Ultrasound (writetobrain.com)
  2. The Mozilla Cycle, Part III: Mozilla Dies in Ignominy (taggart-tech.com)
  3. WorldGen – Text to Immersive 3D Worlds (www.meta.com)
  4. Show HN: Forty.News – Daily news, but on a 40-year delay (forty.news)
  5. China reaches energy milestone by "breeding" uranium from thorium (www.scmp.com)
  6. $1900 Bug Bounty to Fix the Lenovo Legion Pro 7 16IAX10H's Speakers on Linux (github.com)
  7. The privacy nightmare of browser fingerprinting (kevinboone.me)
  8. In a U.S. First, New Mexico Opens Doors to Free Child Care for All (www.wsj.com)
  9. 'The French people want to save us': help pours in for glassmaker Duralex (www.theguardian.com)
  10. TiDAR: Think in Diffusion, Talk in Autoregression (arxiv.org)
  11. Agent design is still hard (lucumr.pocoo.org)
  12. ADHD and monotropism (2023) (monotropism.org)
  13. Roblox CEO Makes a Fool of Himself in Car-Crash Interview (kotaku.com)
  14. Kodak ran a nuclear device in its basement for decades (www.popularmechanics.com)
  15. The Connectivity Standards Alliance Announces Zigbee 4.0 and Suzi (csa-iot.org)

GitHub Trending (15)

  1. sansan0 / TrendRadar

    🎯 Say goodbye to information overload — AI helps you make sense of trending news; simple public-opinion monitoring and analysis. Multi-platform trend aggregation plus MCP-based AI analysis tools. Monitors 35 platforms (Douyin, Zhihu, Bilibili, Wallstreetcn, Cailian Press, and more), with smart filtering, automatic push delivery, and conversational AI analysis (13 tools for mining the news in natural language: trend tracking, sentiment analysis, similarity search, etc.). Supports delivery via WeCom, personal WeChat, Feishu, DingTalk, Telegram, email, and ntfy. 30-second web deployment, phone notifications within 1 minute, no programming required. Docker deployment supported. ⭐ Make the algorithm work for you; understand the trends with AI.

  2. google / adk-go

    An open-source, code-first Go toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.

  3. TapXWorld / ChinaTextbook

    PDF textbooks for all primary, middle, and high schools, plus universities.

  4. yeongpin / cursor-free-vip

    [Support 0.49.x] (Reset Cursor AI MachineID & Bypass Higher Token Limit) Automatically resets the Cursor AI machine ID and unlocks Pro features for free, bypassing: "You've reached your trial request limit. / Too many free trial accounts used on this machine. Please upgrade to pro. We have this limit in place to prevent abuse. Please let us know if you believe this is a mistake."

  5. nvm-sh / nvm

    Node Version Manager - POSIX-compliant bash script to manage multiple active node.js versions

  6. traefik / traefik

    The Cloud Native Application Proxy

  7. HKUDS / LightRAG

    [EMNLP2025] "LightRAG: Simple and Fast Retrieval-Augmented Generation"

  8. bobeff / open-source-games

    A list of open source games.

  9. volcengine / verl

    verl: Volcano Engine Reinforcement Learning for LLMs

  10. GibsonAI / Memori

    Open-Source Memory Engine for LLMs, AI Agents & Multi-Agent Systems

  11. yangshun / tech-interview-handbook

    Curated coding interview preparation materials for busy software engineers

  12. microsoft / call-center-ai

    Send a phone call from AI agent, in an API call. Or, directly call the bot from the configured phone number!

  13. MustardChef / WSABuilds

    Run Windows Subsystem For Android on your Windows 10 and Windows 11 PC using prebuilt binaries with Google Play Store (MindTheGapps) and/or Magisk or KernelSU (root solutions) built in.

  14. playcanvas / engine

    Powerful web graphics runtime built on WebGL, WebGPU, WebXR and glTF

  15. iptv-org / iptv

    Collection of publicly available IPTV channels from all over the world

Hugging Face (15)

  1. Agent0: Unleashing Self-Evolving Agents from Zero Data via Tool-Integrated Reasoning

    Large Language Model (LLM) Agents, often trained with Reinforcement Learning (RL), are constrained by a dependency on human-curated data, limiting scalability and tethering AI to human knowledge. Existing self-evolution frameworks offer an alternative but are typically restricted by the model's inherent capabilities and single-round interactions, hindering the development of complex curricula involving tool use or dynamic reasoning. We introduce Agent0, a fully autonomous framework that evolves high-performing agents without external data through multi-step co-evolution and seamless tool integration. Agent0 establishes a symbiotic competition between two agents initialized from the same base LLM: a curriculum agent that proposes increasingly challenging frontier tasks, and an executor agent that learns to solve them. We integrate external tools to enhance the executor's problem-solving capacity; this improvement, in turn, pressures the curriculum agent to construct more complex, tool-aware tasks. Through this iterative process, Agent0 establishes a self-reinforcing cycle that continuously produces high-quality curricula. Empirically, Agent0 substantially boosts reasoning capabilities, improving the Qwen3-8B-Base model by 18% on mathematical reasoning and 24% on general reasoning benchmarks. Code is available at https://github.com/aiming-lab/Agent0.
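
    The curriculum/executor co-evolution described above can be caricatured in a few lines. This is a toy sketch with invented names (propose_task, solve, co_evolve) and stub arithmetic standing in for RL training; it only illustrates the self-reinforcing feedback structure, not the paper's actual method.

```python
def propose_task(difficulty: int) -> dict:
    """Curriculum agent: propose a frontier task just beyond the current level."""
    return {"difficulty": difficulty + 1}

def solve(task: dict, skill: int, tools: int) -> bool:
    """Executor agent: succeeds if skill plus tool bonus covers the difficulty."""
    return skill + tools >= task["difficulty"]

def co_evolve(rounds: int, tools: int = 1) -> tuple[int, int]:
    """Symbiotic competition: success lets the curriculum escalate,
    failure becomes training signal for the executor."""
    skill, difficulty = 0, 0
    for _ in range(rounds):
        task = propose_task(difficulty)
        if solve(task, skill, tools):
            difficulty = task["difficulty"]  # curriculum escalates
        else:
            skill += 1                       # executor learns from the failure
    return skill, difficulty
```

    Note how the tool bonus lets the executor clear harder tasks, which in turn pressures the curriculum upward — the same pressure loop the abstract describes.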

  2. SAM 3D: 3Dfy Anything in Images

    We present SAM 3D, a generative model for visually grounded 3D object reconstruction, predicting geometry, texture, and layout from a single image. SAM 3D excels in natural images, where occlusion and scene clutter are common and visual recognition cues from context play a larger role. We achieve this with a human- and model-in-the-loop pipeline for annotating object shape, texture, and pose, providing visually grounded 3D reconstruction data at unprecedented scale. We learn from this data in a modern, multi-stage training framework that combines synthetic pretraining with real-world alignment, breaking the 3D "data barrier". We obtain significant gains over recent work, with at least a 5:1 win rate in human preference tests on real-world objects and scenes. We will release our code and model weights, an online demo, and a new challenging benchmark for in-the-wild 3D object reconstruction.

  3. V-ReasonBench: Toward Unified Reasoning Benchmark Suite for Video Generation Models

    Recent progress in generative video models, such as Veo-3, has shown surprising zero-shot reasoning abilities, creating a growing need for systematic and reliable evaluation. We introduce V-ReasonBench, a benchmark designed to assess video reasoning across four key dimensions: structured problem-solving, spatial cognition, pattern-based inference, and physical dynamics. The benchmark is built from both synthetic and real-world image sequences and provides a diverse set of answer-verifiable tasks that are reproducible, scalable, and unambiguous. Evaluations of six state-of-the-art video models reveal clear dimension-wise differences, with strong variation in structured, spatial, pattern-based, and physical reasoning. We further compare video models with strong image models, analyze common hallucination behaviors, and study how video duration affects Chain-of-Frames reasoning. Overall, V-ReasonBench offers a unified and reproducible framework for measuring video reasoning and aims to support the development of models with more reliable, human-aligned reasoning skills.

  4. First Frame Is the Place to Go for Video Content Customization

    What role does the first frame play in video generation models? Traditionally, it's viewed as the spatial-temporal starting point of a video, merely a seed for subsequent animation. In this work, we reveal a fundamentally different perspective: video models implicitly treat the first frame as a conceptual memory buffer that stores visual entities for later reuse during generation. Leveraging this insight, we show that it's possible to achieve robust and generalized video content customization in diverse scenarios, using only 20-50 training examples without architectural changes or large-scale finetuning. This unveils a powerful, overlooked capability of video generation models for reference-based video customization.

  5. Step-Audio-R1 Technical Report

    Recent advances in reasoning models have demonstrated remarkable success in text and vision domains through extended chain-of-thought deliberation. However, a perplexing phenomenon persists in audio language models: they consistently perform better with minimal or no reasoning, raising a fundamental question - can audio intelligence truly benefit from deliberate thinking? We introduce Step-Audio-R1, the first audio reasoning model that successfully unlocks reasoning capabilities in the audio domain. Through our proposed Modality-Grounded Reasoning Distillation (MGRD) framework, Step-Audio-R1 learns to generate audio-relevant reasoning chains that genuinely ground themselves in acoustic features rather than hallucinating disconnected deliberations. Our model exhibits strong audio reasoning capabilities, surpassing Gemini 2.5 Pro and achieving performance comparable to the state-of-the-art Gemini 3 Pro across comprehensive audio understanding and reasoning benchmarks spanning speech, environmental sounds, and music. These results demonstrate that reasoning is a transferable capability across modalities when appropriately anchored, transforming extended deliberation from a liability into a powerful asset for audio intelligence. By establishing the first successful audio reasoning model, Step-Audio-R1 opens new pathways toward building truly multimodal reasoning systems that think deeply across all sensory modalities.

  6. Scaling Spatial Intelligence with Multimodal Foundation Models

    Despite remarkable progress, multimodal foundation models still exhibit surprising deficiencies in spatial intelligence. In this work, we explore scaling up multimodal foundation models to cultivate spatial intelligence within the SenseNova-SI family, built upon established multimodal foundations including visual understanding models (i.e., Qwen3-VL and InternVL3) and unified understanding and generation models (i.e., Bagel). We take a principled approach to constructing high-performing and robust spatial intelligence by systematically curating SenseNova-SI-8M: eight million diverse data samples under a rigorous taxonomy of spatial capabilities. SenseNova-SI demonstrates unprecedented performance across a broad range of spatial intelligence benchmarks: 68.7% on VSI-Bench, 43.3% on MMSI, 85.6% on MindCube, 54.6% on ViewSpatial, and 50.1% on SITE, while maintaining strong general multimodal understanding (e.g., 84.9% on MMBench-En). More importantly, we analyze the impact of data scaling, discuss early signs of emergent generalization capabilities enabled by diverse data training, analyze the risk of overfitting and language shortcuts, present a preliminary study on spatial chain-of-thought reasoning, and validate the potential downstream application. SenseNova-SI is an ongoing project, and this report will be updated continuously. All newly trained multimodal foundation models are publicly released to facilitate further research in this direction.

  7. Video-as-Answer: Predict and Generate Next Video Event with Joint-GRPO

    While language models have become impactful in many real-world applications, video generation remains largely confined to entertainment. Motivated by video's inherent capacity to demonstrate physical-world information that is difficult to convey through language alone (e.g., imagine teaching someone to tie a tie using only text), we identify an underutilized opportunity to extend video as a new answer modality for Next-Event Prediction (NEP), formalized as Video-Next-Event Prediction (VNEP). While the established NEP task takes a video with a procedural or predictive question as input to predict the next event in text, VNEP requires dynamic video responses. This shift from telling to showing unlocks more intuitive and customized answers for procedural learning and creative exploration. However, this task remains challenging for existing models, as it demands an understanding of multimodal input, instruction-conditioned reasoning, and the generation of video with visual and semantic consistency. To address this, we introduce VANS, a model that leverages reinforcement learning to align a Vision-Language Model (VLM) with a Video Diffusion Model (VDM) for VNEP. The core of VANS is our proposed Joint-GRPO that orchestrates the VLM and VDM to function as a unit. Driven by a shared reward on their respective output, it optimizes the VLM to produce captions that are both accurate and friendly to visualize, while guiding the VDM to generate videos that are faithful to these captions and the input visual context. To enable this learning, we craft VANS-Data-100K, a dedicated dataset for the VNEP task. Experiments on procedural and predictive benchmarks demonstrate that VANS achieves state-of-the-art performance in both video event prediction and visualization. Codes are released in https://github.com/KlingTeam/VANS.

  8. MiMo-Embodied: X-Embodied Foundation Model Technical Report

    We open-source MiMo-Embodied, the first cross-embodied foundation model to successfully integrate and achieve state-of-the-art performance in both Autonomous Driving and Embodied AI. MiMo-Embodied sets new records across 17 embodied AI benchmarks in Task Planning, Affordance Prediction and Spatial Understanding, while also excelling in 12 autonomous driving benchmarks across Environmental Perception, Status Prediction, and Driving Planning. Across these tasks, MiMo-Embodied significantly outperforms existing open-source, closed-source, and specialized baselines. Our results indicate that through multi-stage learning, curated data construction, and CoT/RL fine-tuning, these two domains exhibit strong positive transfer and mutually reinforce one another. We provide a detailed analysis of our model design and training methodologies to facilitate further research. Code and models are available at https://github.com/XiaomiMiMo/MiMo-Embodied.

  9. Generalist Foundation Models Are Not Clinical Enough for Hospital Operations

    Hospitals and healthcare systems rely on operational decisions that determine patient flow, cost, and quality of care. Despite strong performance on medical knowledge and conversational benchmarks, foundation models trained on general text may lack the specialized knowledge required for these operational decisions. We introduce Lang1, a family of models (100M-7B parameters) pretrained on a specialized corpus blending 80B clinical tokens from NYU Langone Health's EHRs and 627B tokens from the internet. To rigorously evaluate Lang1 in real-world settings, we developed the REalistic Medical Evaluation (ReMedE), a benchmark derived from 668,331 EHR notes that evaluates five critical tasks: 30-day readmission prediction, 30-day mortality prediction, length of stay, comorbidity coding, and predicting insurance claims denial. In zero-shot settings, both general-purpose and specialized models underperform on four of five tasks (36.6%-71.7% AUROC), with mortality prediction being an exception. After finetuning, Lang1-1B outperforms finetuned generalist models up to 70x larger and zero-shot models up to 671x larger, improving AUROC by 3.64%-6.75% and 1.66%-23.66% respectively. We also observed cross-task scaling with joint finetuning on multiple tasks leading to improvement on other tasks. Lang1-1B effectively transfers to out-of-distribution settings, including other clinical tasks and an external health system. Our findings suggest that predictive capabilities for hospital operations require explicit supervised finetuning, and that this finetuning process is made more efficient by in-domain pretraining on EHR. Our findings support the emerging view that specialized LLMs can compete with generalist models in specialized tasks, and show that effective healthcare systems AI requires the combination of in-domain pretraining, supervised finetuning, and real-world evaluation beyond proxy benchmarks.

  10. Nemotron Elastic: Towards Efficient Many-in-One Reasoning LLMs

    Training a family of large language models targeting multiple scales and deployment objectives is prohibitively expensive, requiring separate training runs for each different size. Recent work on model compression through pruning and knowledge distillation has reduced this cost; however, this process still incurs hundreds of billions of tokens worth of training cost per compressed model. In this paper, we present Nemotron Elastic, a framework for building reasoning-oriented LLMs, including hybrid Mamba-Attention architectures, that embed multiple nested submodels within a single parent model, each optimized for different deployment configurations and budgets. Each of these submodels shares weights with the parent model and can be extracted zero-shot during deployment without additional training or fine-tuning. We enable this functionality through an end-to-end trained router, tightly coupled to a two-stage training curriculum designed specifically for reasoning models. We additionally introduce group-aware SSM elastification that preserves Mamba's structural constraints, heterogeneous MLP elastification, normalized MSE-based layer importance for improved depth selection, and knowledge distillation enabling simultaneous multi-budget optimization. We apply Nemotron Elastic to the Nemotron Nano V2 12B model, simultaneously producing a 9B and a 6B model using only 110B training tokens; this results in over 360x cost reduction compared to training model families from scratch, and around 7x compared to SoTA compression techniques. Each of the nested models performs on par or better than the SoTA in accuracy. Moreover, unlike other compression methods, the nested capability of our approach allows having a many-in-one reasoning model that has constant deployment memory against the number of models in the family.

  11. SRPO: Self-Referential Policy Optimization for Vision-Language-Action Models

    Vision-Language-Action (VLA) models excel in robotic manipulation but are constrained by their heavy reliance on expert demonstrations, leading to demonstration bias and limiting performance. Reinforcement learning (RL) is a vital post-training strategy to overcome these limits, yet current VLA-RL methods, including group-based optimization approaches, are crippled by severe reward sparsity. Relying on binary success indicators wastes valuable information in failed trajectories, resulting in low training efficiency. To solve this, we propose Self-Referential Policy Optimization (SRPO), a novel VLA-RL framework. SRPO eliminates the need for external demonstrations or manual reward engineering by leveraging the model's own successful trajectories, generated within the current training batch, as a self-reference. This allows us to assign a progress-wise reward to failed attempts. A core innovation is the use of latent world representations to measure behavioral progress robustly. Instead of relying on raw pixels or requiring domain-specific fine-tuning, we utilize the compressed, transferable encodings from a world model's latent space. These representations naturally capture progress patterns across environments, enabling accurate, generalized trajectory comparison. Empirical evaluations on the LIBERO benchmark demonstrate SRPO's efficiency and effectiveness. Starting from a supervised baseline with 48.9% success, SRPO achieves a new state-of-the-art success rate of 99.2% in just 200 RL steps, representing a 103% relative improvement without any extra supervision. Furthermore, SRPO shows substantial robustness, achieving a 167% performance improvement on the LIBERO-Plus benchmark.
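
    The self-referential reward above — scoring a failed rollout by how close its latent world-model states get to in-batch successful trajectories — might be sketched as follows. All names (latent_distance, progress_reward) and the distance-based shaping are illustrative assumptions, not the paper's actual formulation.

```python
import math

def latent_distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two latent world-model states."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def progress_reward(failed_traj: list[list[float]],
                    success_trajs: list[list[list[float]]]) -> float:
    """Dense reward in (0, 1]: approaches 1.0 as the failed rollout's final
    latent state nears the final state of some successful in-batch rollout."""
    final = failed_traj[-1]
    nearest = min(latent_distance(final, s[-1]) for s in success_trajs)
    return 1.0 / (1.0 + nearest)
```

    The point of the sketch: instead of a binary 0 for every failure, each failed trajectory earns a graded signal from the batch's own successes, with no external demonstrations or hand-built reward.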

  12. Thinking-while-Generating: Interleaving Textual Reasoning throughout Visual Generation

    Recent advances in visual generation have increasingly explored the integration of reasoning capabilities. They incorporate textual reasoning, i.e., think, either before (as pre-planning) or after (as post-refinement) the generation process, yet they lack on-the-fly multimodal interaction during the generation itself. In this preliminary study, we introduce Thinking-while-Generating (TwiG), the first interleaved framework that enables co-evolving textual reasoning throughout the visual generation process. As visual content is progressively generating, textual reasoning is interleaved to both guide upcoming local regions and reflect on previously synthesized ones. This dynamic interplay produces more context-aware and semantically rich visual outputs. To unveil the potential of this framework, we investigate three candidate strategies, zero-shot prompting, supervised fine-tuning (SFT) on our curated TwiG-50K dataset, and reinforcement learning (RL) via a customized TwiG-GRPO strategy, each offering unique insights into the dynamics of interleaved reasoning. We hope this work inspires further research into interleaving textual reasoning for enhanced visual generation. Code will be released at: https://github.com/ZiyuGuo99/Thinking-while-Generating.

  13. TurkColBERT: A Benchmark of Dense and Late-Interaction Models for Turkish Information Retrieval

    Neural information retrieval systems excel in high-resource languages but remain underexplored for morphologically rich, lower-resource languages such as Turkish. Dense bi-encoders currently dominate Turkish IR, yet late-interaction models -- which retain token-level representations for fine-grained matching -- have not been systematically evaluated. We introduce TurkColBERT, the first comprehensive benchmark comparing dense encoders and late-interaction models for Turkish retrieval. Our two-stage adaptation pipeline fine-tunes English and multilingual encoders on Turkish NLI/STS tasks, then converts them into ColBERT-style retrievers using PyLate trained on MS MARCO-TR. We evaluate 10 models across five Turkish BEIR datasets covering scientific, financial, and argumentative domains. Results show strong parameter efficiency: the 1.0M-parameter colbert-hash-nano-tr is 600× smaller than the 600M turkish-e5-large dense encoder while preserving over 71% of its average mAP. Late-interaction models that are 3-5× smaller than dense encoders significantly outperform them; ColmmBERT-base-TR yields up to +13.8% mAP on domain-specific tasks. For production readiness, we compare indexing algorithms: MUVERA+Rerank is 3.33× faster than PLAID and offers a +1.7% relative mAP gain. This enables low-latency retrieval, with ColmmBERT-base-TR achieving 0.54 ms query times under MUVERA. We release all checkpoints, configs, and evaluation scripts. Limitations include reliance on moderately sized datasets (≤50K documents) and translated benchmarks, which may not fully reflect real-world Turkish retrieval conditions; larger-scale MUVERA evaluations remain necessary.

  14. NaTex: Seamless Texture Generation as Latent Color Diffusion

    We present NaTex, a native texture generation framework that predicts texture color directly in 3D space. In contrast to previous approaches that rely on baking 2D multi-view images synthesized by geometry-conditioned Multi-View Diffusion models (MVDs), NaTex avoids several inherent limitations of the MVD pipeline. These include difficulties in handling occluded regions that require inpainting, achieving precise mesh-texture alignment along boundaries, and maintaining cross-view consistency and coherence in both content and color intensity. NaTex features a novel paradigm that addresses the aforementioned issues by viewing texture as a dense color point cloud. Driven by this idea, we propose latent color diffusion, which comprises a geometry-awared color point cloud VAE and a multi-control diffusion transformer (DiT), entirely trained from scratch using 3D data, for texture reconstruction and generation. To enable precise alignment, we introduce native geometry control that conditions the DiT on direct 3D spatial information via positional embeddings and geometry latents. We co-design the VAE-DiT architecture, where the geometry latents are extracted via a dedicated geometry branch tightly coupled with the color VAE, providing fine-grained surface guidance that maintains strong correspondence with the texture. With these designs, NaTex demonstrates strong performance, significantly outperforming previous methods in texture coherence and alignment. Moreover, NaTex also exhibits strong generalization capabilities, either training-free or with simple tuning, for various downstream applications, e.g., material generation, texture refinement, and part segmentation and texturing.

  15. SAM2S: Segment Anything in Surgical Videos via Semantic Long-term Tracking

    Surgical video segmentation is crucial for computer-assisted surgery, enabling precise localization and tracking of instruments and tissues. Interactive Video Object Segmentation (iVOS) models such as Segment Anything Model 2 (SAM2) provide prompt-based flexibility beyond methods with predefined categories, but face challenges in surgical scenarios due to the domain gap and limited long-term tracking. To address these limitations, we construct SA-SV, the largest surgical iVOS benchmark with instance-level spatio-temporal annotations (masklets) spanning eight procedure types (61k frames, 1.6k masklets), enabling comprehensive development and evaluation for long-term tracking and zero-shot generalization. Building on SA-SV, we propose SAM2S, a foundation model enhancing SAM2 for Surgical iVOS through: (1) DiveMem, a trainable diverse memory mechanism for robust long-term tracking; (2) temporal semantic learning for instrument understanding; and (3) ambiguity-resilient learning to mitigate annotation inconsistencies across multi-source datasets. Extensive experiments demonstrate that fine-tuning on SA-SV enables substantial performance gains, with SAM2 improving by 12.99 average J&F over vanilla SAM2. SAM2S further advances performance to 80.42 average J&F, surpassing vanilla and fine-tuned SAM2 by 17.10 and 4.11 points respectively, while maintaining 68 FPS real-time inference and strong zero-shot generalization. Code and dataset will be released at https://jinlab-imvr.github.io/SAM2S.

Solidot (15)

  1. Iran's president says the capital must move

    Battered by a deepening ecological crisis and severe water shortages, Iranian President Masoud Pezeshkian said Thursday that Tehran can no longer function as the capital and that Iran has no choice but to relocate it. Officials are considering moving the capital to the southern coast, but experts say the move will not help Tehran's nearly ten million residents, who are living with the consequences of decades of declining water supply. Iran has relocated its capital many times over the centuries; this would be the first move driven by ecological disaster. Linda Shi, a social scientist and urban planner at Cornell University, says climate change is not the cause of the situation, but it is a convenient excuse for dodging responsibility for bad political decisions. Since at least 2008, scientists have warned that unchecked groundwater pumping by Iran's cities and farms is rapidly draining the country's aquifers, which lose roughly 1.7 billion cubic meters of water a year. Climate change is indeed a convenient excuse.

  2. Thunderbird Pro tests a $9-per-month paid service

    The open-source email client project Thunderbird has begun production testing of its paid service, Thunderbird Pro. For $9 a month, the plan bundles email hosting, Send encrypted file sharing, and Appointment scheduling, with 30 GB of mail storage, 300 GB of Send storage, 15 email addresses, and 3 custom domains. Many consider the pricing too high.

  3. The British Army will train soldiers with Call of Duty

    General Sir Tom Copinger-Symes, deputy commander of the UK's Cyber and Specialist Operations Command, said the war in Ukraine has proven the value of soldiers who are good at games, with remotely operated drones proving critical to that conflict. On Friday the UK Ministry of Defence announced the International Defence Esports Games (IDEG), a defence-oriented esports competition in which future cyber warriors from various nations will compete. Alongside the hit game Call of Duty, IDEG participants will also compete in the drone simulator VelociDrone, which recreates scenarios common on Ukrainian battlefields and is already used to train British drone operators. The MoD says gaming has sharpened operators' targeting precision and reaction times, to lethal effect against Russian forces.

  4. How to turn off Gemini AI in Google apps

    Google has been found to enable "Smart features in Gmail, Chat, and Meet" and "Google Workspace smart features" by default. Per Google's own description, turning these settings on means you agree to let Gmail, Google Chat, and Google Meet use your content and activity in those products to provide smart features and a personalized experience, and to let Google Workspace use your Workspace content and activity to personalize your experience across Workspace, which includes apps for business and education such as Gmail, Chat, Meet, and Drive. Users must opt out manually: in Gmail, go to Settings -> See all settings -> Smart features and uncheck "Smart features in Gmail, Chat, and Meet"; then, after restarting the app, open "Manage Google Workspace smart feature settings" under Smart features and uncheck every box.

  5. Google exposes the BadAudio malware used by APT24

    On its official blog, the Google Threat Intelligence Group (GTIG) detailed BadAudio, malware used by the APT24 espionage group. Over the past three years APT24 has deployed the previously undocumented BadAudio, a heavily obfuscated first-stage downloader used to establish persistent access, on victim networks. APT24 has relied on supply-chain compromises, multi-layered social engineering, and abuse of legitimate cloud services such as Google Drive and OneDrive, demonstrating steadily evolving capabilities. For example, beginning in July 2024, APT24 repeatedly compromised a Taiwanese digital marketing company that supplies a JavaScript library to client websites, injecting malicious JS code into one of the company's widely used libraries; it also used typosquatting domains to impersonate legitimate CDNs.

  6. Moss spores remain fertile after nine months outside the space station

    Moss grows in some of Earth's most extreme environments. Inspired by this, researchers sent moss sporophytes (the reproductive structures that encase spores) to the most extreme environment yet: space. Japanese researchers report that more than 80% of the spores survived over nine months outside the International Space Station (ISS) and remained capable of reproducing after returning to Earth — the first demonstration that an early land plant can endure prolonged exposure to space. The researchers tested three different moss structures — protonemata (juvenile moss), brood cells (specialized stem cells produced under stress), and sporophytes — to find which had the best odds of surviving in space. UV radiation proved the hardest factor to overcome, and sporophytes were the most resilient of the three. Juvenile moss did not survive intense UV or temperature extremes at all; brood cells fared slightly better, but the spores were roughly 1,000 times more tolerant of UV radiation. Spores also survived and germinated after more than a week at -196°C and after a month at 55°C. The team believes the structure surrounding the spores acts as a protective shield, absorbing UV radiation and protecting the spores inside both physically and chemically. The researchers estimate that encased spores could survive up to 5,600 days — about 15 years — in space, stressing that this is only a rough estimate and that larger datasets are needed to predict moss survival in space accurately.

  7. How were emoticons invented?

    On September 16, 1982, Carnegie Mellon computer scientist Neil Swartz posed a physics puzzle on a university BBS: what happens to a candle and a drop of mercury in a free-falling elevator? That evening, computer scientist Howard Gayle replied with a joke: warning — due to a physics experiment, an elevator had been "contaminated with mercury" and suffered "minor damage" from a fire. Despite a clarification that it was a joke, some readers took it seriously, sparking a debate over how to avoid such misunderstandings. On September 17, Swartz suggested marking joke threads with *; others proposed % instead, or combining the two (% for funny jokes, * for bad jokes), or using & or #. Some were already marking jokes with a smile-like symbol, \__/, though it never caught on. On September 19, Scott Fahlman, a research assistant professor of computer science, proposed :-) for jokes and :-( for serious comments. The proposal spread quickly over ARPAnet and gradually became a staple of online communication, with the hyphen often dropped to give :) and :(. Fahlman may not have been the first to propose such a symbol, but he offered the right solution in the right context at the right time.

  8. CrossOver Preview announces Linux ARM64 support

    kgnix writes: Earlier this month CodeWeavers released a new CrossOver Preview with a milestone breakthrough: native support for Linux on ARM64. It is a major step for CrossOver's compatibility-layer technology and another sign of the Wine ecosystem maturing on new architectures. CodeWeavers has long provided Wine-based Windows application compatibility for macOS and Linux users and enterprises; its CrossOver product builds on the open-source Wine project with additional integration and optimization for a more complete, convenient experience, and the company is a long-time Wine contributor and the developer of Valve's Steam Proton. CrossOver product manager Meredith said ARM64 support was a multi-year effort: across the Wine 8.0, 9.0, and 10.0 release cycles, the team completed the key components — PE conversion, WoW64 thunks, ARM64EC mode, binary translation, FEX emulator integration, and ARM64EC support for toolchains such as LLVM — that make today's ARM64 CrossOver possible. CrossOver ARM64 can already run AAA titles such as Cyberpunk 2077 and Ghost of Tsushima smoothly on ARM64 desktops like the System76 Thelio Astra, with surprisingly good performance. CodeWeavers' ambitions appear to go beyond gaming: the team hopes CrossOver ARM64 will become a viable path for enterprises migrating existing Windows workloads to Linux, improving security while reducing maintenance complexity and system bloat. Meanwhile, Valve has just announced the Steam Frame, an ARM64 + Linux device that likewise uses Wine technology to run x86 Windows games. With the two products debuting almost simultaneously, one has to wonder: are ARM-based chips preparing to compete directly and broadly with traditional x86 in both the consumer and enterprise PC markets?

  9. Chronic diseases are spreading among animals

    Cancer, obesity, diabetes, and degenerative joint disease are no longer just human problems; they are now spreading through the animal kingdom. Worldwide, serious health problems are appearing in pets, livestock, and marine life. Research finds that genetic predisposition raises disease risk in some animals: dogs and cats selectively bred for appearance, and livestock bred for productivity, are more likely to develop diabetes and mitral valve disease. Environmental factors — poor diet, physical inactivity, and chronic stress — also shape how and when these diseases emerge across species. Examples are found in many settings. Obesity is rampant in pet cats and dogs: recent surveys estimate 50-60% of cats and dogs are overweight, driving a year-on-year rise in feline diabetes. In agricultural settings, about 20% of confined pigs develop osteoarthritis. Marine animals face similar challenges: gastrointestinal cancers have been recorded in beluga whales, and farmed Atlantic salmon develop cardiomyopathy syndrome. Wild animals living in estuaries polluted by industrial chemicals such as polycyclic aromatic hydrocarbons and PCBs show liver cancer rates as high as 15-25%. The research notes that human-driven ecological transformation is amplifying these threats.

  10. Theia, the planet that struck Earth, may have formed closer to the Sun

    According to a study published in Science, researchers tracing iron isotope fingerprints in lunar and terrestrial rocks — seeking to unravel the mysterious origin of the Moon's precursor — have added evidence that the Moon originated in the inner Solar System. The results suggest that Theia, the Mars-sized body whose collision with Earth formed the Moon, may have been born closer to the Sun than Earth. The Moon is thought to have formed about 100 million years after the Solar System's birth, when Theia struck the early Earth, and most models of the process suggest the Moon is built mainly from material from the ancient impactor. If Theia's isotopic composition differed from Earth's, the Moon's should differ too; such isotopic differences can reveal where in the Solar System a body originated, offering clues to Theia's birthplace. Yet analyses of lunar rocks show the Moon and Earth are nearly identical in the isotopic composition of many elements. Various models compete to explain the similarity, but the lack of a clear isotopic difference, and uncertainty about its cause, have made it hard to pin down where Theia first formed. The new analysis shows the iron isotope compositions of Earth and the Moon to be indistinguishable, and suggests Theia likely originated in the inner Solar System, possibly even closer to the Sun than the proto-Earth.

  11. Xubuntu.org discloses details of its website breach

    On its mailing list, the Xubuntu project disclosed details of its website compromise. The official site was breached in the middle of last month and seeded with a malicious file named Xubuntu-Safe-Download.zip. Developers say the attackers gained access by brute-forcing a weak WordPress component. The incident affected only the download site and the torrent links it serves; Xubuntu's build systems, packages, and other components were untouched. Users who downloaded Xubuntu-Safe-Download.zip are advised to delete it immediately and scan their systems with security software.

  12. A US CDC web page now claims vaccines are linked to autism

    Under anti-vaccine health secretary Robert F. Kennedy Jr., the US Centers for Disease Control and Prevention has taken down a page cataloguing the extensive evidence refuting the claim that vaccines cause autism, replacing it with an assertion that vaccines are associated with autism. The move will no doubt be welcomed by anti-vaxxers, but it will only deepen public distrust, fear, and confusion, further erode America's already precarious vaccination rates, and lead to more illness, suffering, and death from preventable infections, especially among children and vulnerable groups. Anonymous CDC officials said the agency's senior scientists were unaware of the page update and were not consulted about its content.

  13. Microsoft releases the source code of the Zork series

    Microsoft, working with Team Xbox and Activision, has released the source code of Zork I, II, and III under the MIT license. Each repository contains the original source code and related documentation. Zork is a text adventure series that debuted on the PDP-10 mainframe in 1977; its developers went on to found Infocom, expanding the game into a trilogy — Zork I: The Great Underground Empire, Zork II: The Wizard of Frobozz, and Zork III: The Dungeon Master — which reached PCs in 1980. Players type text commands to move their character through hundreds of locations, solving puzzles and collecting treasure, while the program plays narrator, describing the player's location and the results of each command. The series is celebrated as the most famous work of interactive fiction (text adventure). Activision acquired Infocom in 1986, gaining ownership of Zork.

  14. HP and Dell have disabled HEVC hardware decoding on some PCs

    HP and Dell have been found to disable HEVC hardware decoding on some laptop models, likely in response to rising HEVC licensing fees. Nearly all current Intel and AMD CPUs support HEVC hardware decoding, yet HP and Dell users found they could not play HEVC/H.265 content in their browsers. Affected HP models include the HP ProBook 460 G11, ProBook 465 G11, and EliteBook 665 G11. An HP spokesperson suggested customers use third-party software that supports HEVC (H.265) decoding. HEVC licensing fees rise starting next January: in the US, the per-unit royalty for volumes above 100,001 HEVC devices climbs from $0.20 to $0.24. For scale, Gartner data show HP shipped 15,002,000 laptops and desktops in the third quarter of this year, and Dell 10,166,000.
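
    A back-of-the-envelope check of the royalty figures cited above ($0.20 to $0.24 per unit, against Gartner's Q3 shipment counts). This is illustrative only: the real fee schedule is tiered and capped, and not every shipped PC necessarily pays the US rate.

```python
old_rate, new_rate = 0.20, 0.24           # USD per unit above the 100,001-unit tier (US)
hp_q3, dell_q3 = 15_002_000, 10_166_000   # Gartner Q3 laptop + desktop shipments

# Marginal cost increase if one quarter's volume all paid the higher rate
hp_delta = (new_rate - old_rate) * hp_q3
dell_delta = (new_rate - old_rate) * dell_q3
print(f"HP:   +${hp_delta:,.0f}")
print(f"Dell: +${dell_delta:,.0f}")
```

    Roughly $600K per quarter for HP and $400K for Dell under these naive assumptions — small against total revenue, which suggests the decision is as much about licensing principle as raw cost.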

  15. Man who cryopreserved his wife stirs debate by taking a new partner

    Eight years ago, as his wife Zhan Wenlian lay dying of lung cancer, Gui Junmin made an unorthodox decision: he had her body cryopreserved intact, hoping one day to welcome her back. Zhan became the first person cryopreserved in China. At the Shandong Yinfeng Life Science Research Institute, the vessel storing her is labeled "Tank No. 1"; inside, liquid nitrogen at -196°C holds time nearly still, binding an ordinary family to the dream of technology-made immortality. Gui never uses the word "dead" for his wife. In his telling, she is only asleep, and will sleep until medicine can conquer lung cancer. "Otherwise (after revival) she'd just suffer through it all again, and there'd be no point." Zhan's cryopreservation contract runs for 30 years. While waiting for her revival, Gui's life has changed: he has been through two operations, and he now has a girlfriend. Some see his moving on as only human; others call him selfish, respectful of neither his late wife nor his current partner.