OrangeBot.AI Digest — 2025-10-04

51 headlines across 8 sources, aggregated for the day.

Hacker News (15)

  1. The UK is still trying to backdoor encryption for Apple users (www.eff.org)
  2. ProofOfThought: LLM-based reasoning using Z3 theorem proving (github.com)
  3. Self-hosting email like it's 1984 (maxadamski.com)
  4. Flock's gunshot detection microphones will start listening for human voices (www.eff.org)
  5. A comparison of Ada and Rust, using solutions to the Advent of Code (github.com)
  6. How I influence tech company politics as a staff software engineer (www.seangoedecke.com)
  7. Circular Financing: Does Nvidia's $110B Bet Echo the Telecom Bubble? (tomtunguz.com)
  8. Google removes ICE-spotting app following Apple's ICEBlock crackdown (www.theverge.com)
  9. The Buchstabenmuseum Berlin is closing (www.buchstabenmuseum.de)
  10. Scientists are discovering a powerful new way to prevent cancer (www.economist.com)
  11. Paged Out Issue #7 [pdf] (pagedout.institute)
  12. Earth was born dry until a cosmic collision made it a blue planet (www.sciencedaily.com)
  13. Systems Programming with Zig (www.manning.com)
  14. Alibaba cloud FPGA: the $200 Kintex UltraScale+ (essenceia.github.io)
  15. Toyota runs a car-hacking event to boost security (2024) (toyotatimes.jp)

GitHub Trending (15)

  1. juspay / hyperswitch

    An open source payments switch written in Rust to make payments fast, reliable and affordable

  2. meshery / meshery

    Meshery, the cloud native manager

  3. google / tunix

    A JAX-native LLM Post-Training Library

  4. Stremio / stremio-web

    Stremio - Freedom to Stream

  5. tigerbeetle / tigerbeetle

    The financial transactions database designed for mission critical safety and performance.

  6. paaatrick / playball

    Watch MLB games from the comfort of your own terminal

  7. AykutSarac / jsoncrack.com

    ✨ Innovative and open-source visualization application that transforms various data formats, such as JSON, YAML, XML, CSV and more, into interactive graphs.

  8. simular-ai / Agent-S

    Agent S: an open agentic framework that uses computers like a human

  9. kestra-io / kestra

    Orchestrate everything - from scripts to data, infra, AI, and business - as code, with UI and AI Copilot. Simple. Fast. Scalable.

  10. microsoft / BitNet

    Official inference framework for 1-bit LLMs

  11. Infisical / infisical

    Infisical is the open-source platform for secrets management, PKI, and SSH access.

  12. signalapp / libsignal

    Home to the Signal Protocol as well as other cryptographic primitives which make Signal possible.

  13. MudBlazor / MudBlazor

    Blazor Component Library based on Material Design principles with an emphasis on ease of use and extensibility

  14. pathwaycom / pathway

    Python ETL framework for stream processing, real-time analytics, LLM pipelines, and RAG.

  15. glide-browser / glide

    An extensible and keyboard-focused web browser

Hugging Face (15)

  1. LongCodeZip: Compress Long Context for Code Language Models

    Code generation under long contexts is becoming increasingly critical as Large Language Models (LLMs) are required to reason over extensive information in the codebase. While recent advances enable code LLMs to process long inputs, high API costs and generation latency remain substantial bottlenecks. Existing context pruning techniques, such as LLMLingua, achieve promising results for general text but overlook code-specific structures and dependencies, leading to suboptimal performance in programming tasks. In this paper, we propose LongCodeZip, a novel plug-and-play code compression framework designed specifically for code LLMs. LongCodeZip employs a dual-stage strategy: (1) coarse-grained compression, which identifies and ranks function-level chunks using conditional perplexity with respect to the instruction, retaining only the most relevant functions; and (2) fine-grained compression, which segments retained functions into blocks based on perplexity and selects an optimal subset under an adaptive token budget to maximize relevance. Evaluations across multiple tasks, including code completion, summarization, and question answering, show that LongCodeZip consistently outperforms baseline methods, achieving up to a 5.6x compression ratio without degrading task performance. By effectively reducing context size while preserving essential information, LongCodeZip enables LLMs to better scale to real-world, large-scale code scenarios, advancing the efficiency and capability of code intelligence applications.
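
The two-stage selection described above can be sketched as a budgeted ranking problem. The snippet below is a minimal illustration, not the paper's implementation: the `scores` list stands in for the conditional perplexity of each function-level chunk with respect to the instruction (lower = more relevant), and whitespace splitting stands in for real tokenization.

```python
def compress_context(chunks, scores, token_budget):
    """Greedy budgeted selection: keep the most relevant chunks
    (lowest stand-in perplexity) that fit within the token budget."""
    ranked = sorted(zip(chunks, scores), key=lambda cs: cs[1])
    kept, used = [], 0
    for chunk, _ in ranked:
        cost = len(chunk.split())  # crude token count, for illustration only
        if used + cost <= token_budget:
            kept.append(chunk)
            used += cost
    kept.sort(key=chunks.index)  # restore original source order
    return kept
```

A real system would score chunks with the LLM itself and apply the same idea again at block granularity inside each retained function.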

  2. Self-Forcing++: Towards Minute-Scale High-Quality Video Generation

    Diffusion models have revolutionized image and video generation, achieving unprecedented visual quality. However, their reliance on transformer architectures incurs prohibitively high computational costs, particularly when extending generation to long videos. Recent work has explored autoregressive formulations for long video generation, typically by distilling from short-horizon bidirectional teachers. Nevertheless, given that teacher models cannot synthesize long videos, the extrapolation of student models beyond their training horizon often leads to pronounced quality degradation, arising from the compounding of errors within the continuous latent space. In this paper, we propose a simple yet effective approach to mitigate quality degradation in long-horizon video generation without requiring supervision from long-video teachers or retraining on long video datasets. Our approach centers on exploiting the rich knowledge of teacher models to provide guidance for the student model through sampled segments drawn from self-generated long videos. Our method maintains temporal consistency while scaling video length by up to 20x beyond teacher's capability, avoiding common issues such as over-exposure and error-accumulation without recomputing overlapping frames like previous methods. When scaling up the computation, our method shows the capability of generating videos up to 4 minutes and 15 seconds, equivalent to 99.9% of the maximum span supported by our base model's position embedding and more than 50x longer than that of our baseline model. Experiments on standard benchmarks and our proposed improved benchmark demonstrate that our approach substantially outperforms baseline methods in both fidelity and consistency. Our long-horizon videos demo can be found at https://self-forcing-plus-plus.github.io/

  3. ExGRPO: Learning to Reason from Experience

    Reinforcement learning from verifiable rewards (RLVR) is an emerging paradigm for improving the reasoning ability of large language models. However, standard on-policy training discards rollout experiences after a single update, leading to computational inefficiency and instability. While prior work on RL has highlighted the benefits of reusing past experience, the role of experience characteristics in shaping learning dynamics of large reasoning models remains underexplored. In this paper, we are the first to investigate what makes a reasoning experience valuable and identify rollout correctness and entropy as effective indicators of experience value. Based on these insights, we propose ExGRPO (Experiential Group Relative Policy Optimization), a framework that organizes and prioritizes valuable experiences, and employs a mixed-policy objective to balance exploration with experience exploitation. Experiments on five backbone models (1.5B-8B parameters) show that ExGRPO consistently improves reasoning performance on mathematical/general benchmarks, with an average gain of +3.5/7.6 points over on-policy RLVR. Moreover, ExGRPO stabilizes training on both stronger and weaker models where on-policy methods fail. These results highlight principled experience management as a key ingredient for efficient and scalable RLVR.
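
The core idea of prioritizing rollouts by correctness and entropy can be sketched as a scored replay buffer. The scoring function below is a hypothetical heuristic (the paper's exact criteria may differ): it favors rollouts at moderate correctness (neither trivially solved nor hopeless) with entropy near a target value.

```python
import heapq

def experience_value(correct_frac, entropy, target_entropy=0.6):
    """Heuristic stand-in for ExGRPO's value indicators: peak value at
    50% rollout correctness and entropy close to the target."""
    difficulty = 1.0 - abs(correct_frac - 0.5) * 2  # peaks at 50% correct
    focus = 1.0 - abs(entropy - target_entropy)     # prefer near-target entropy
    return difficulty * max(focus, 0.0)

def top_experiences(buffer, k):
    """Select the k highest-value stored rollouts to replay alongside
    fresh on-policy data in a mixed-policy objective."""
    return heapq.nlargest(k, buffer, key=lambda e: experience_value(e["acc"], e["ent"]))
```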

  4. StealthAttack: Robust 3D Gaussian Splatting Poisoning via Density-Guided Illusions

    3D scene representation methods like Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have significantly advanced novel view synthesis. As these methods become prevalent, addressing their vulnerabilities becomes critical. We analyze 3DGS robustness against image-level poisoning attacks and propose a novel density-guided poisoning method. Our method strategically injects Gaussian points into low-density regions identified via Kernel Density Estimation (KDE), embedding viewpoint-dependent illusory objects clearly visible from poisoned views while minimally affecting innocent views. Additionally, we introduce an adaptive noise strategy to disrupt multi-view consistency, further enhancing attack effectiveness. We propose a KDE-based evaluation protocol to assess attack difficulty systematically, enabling objective benchmarking for future research. Extensive experiments demonstrate our method's superior performance compared to state-of-the-art techniques. Project page: https://hentci.github.io/stealthattack/
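
The density-guided step can be illustrated with a toy kernel density estimate. The paper works in 3D over Gaussian point positions; the sketch below uses one dimension and a naive Gaussian KDE purely to show how low-density injection sites would be chosen.

```python
import math

def gaussian_kde(points, x, bandwidth=0.5):
    """Naive 1D Gaussian kernel density estimate (illustrative only)."""
    norm = len(points) * bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2) for p in points) / norm

def lowest_density_candidate(points, candidates, bandwidth=0.5):
    """Pick the candidate location with the lowest estimated density --
    where a StealthAttack-style method would inject illusory Gaussians."""
    return min(candidates, key=lambda c: gaussian_kde(points, c, bandwidth))
```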

  5. Interactive Training: Feedback-Driven Neural Network Optimization

    Traditional neural network training typically follows fixed, predefined optimization recipes, lacking the flexibility to dynamically respond to instabilities or emerging training issues. In this paper, we introduce Interactive Training, an open-source framework that enables real-time, feedback-driven intervention during neural network training by human experts or automated AI agents. At its core, Interactive Training uses a control server to mediate communication between users or agents and the ongoing training process, allowing users to dynamically adjust optimizer hyperparameters, training data, and model checkpoints. Through three case studies, we demonstrate that Interactive Training achieves superior training stability, reduced sensitivity to initial hyperparameters, and improved adaptability to evolving user needs, paving the way toward a future training paradigm where AI agents autonomously monitor training logs, proactively resolve instabilities, and optimize training dynamics.
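
The control-server idea reduces to a training loop that drains a control channel between steps and applies overrides without restarting the run. The sketch below uses an in-process `queue.Queue` as a stand-in for the paper's control server; message format and parameter names are assumptions.

```python
import queue

def train(steps, control, lr=0.1):
    """Toy interactive loop: before each step, drain pending control
    messages (e.g. {"lr": 0.01}) and apply them to the live run."""
    history = []
    for _ in range(steps):
        while True:
            try:
                msg = control.get_nowait()
            except queue.Empty:
                break
            lr = msg.get("lr", lr)  # live override from a user or agent
        history.append(lr)          # a real loop would take an optimizer step here
    return history
```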

  6. StockBench: Can LLM Agents Trade Stocks Profitably In Real-world Markets?

    Large language models (LLMs) have recently demonstrated strong capabilities as autonomous agents, showing promise in reasoning, tool use, and sequential decision-making. While prior benchmarks have evaluated LLM agents in domains such as software engineering and scientific discovery, the finance domain remains underexplored, despite its direct relevance to economic value and high-stakes decision-making. Existing financial benchmarks primarily test static knowledge through question answering, but they fall short of capturing the dynamic and iterative nature of trading. To address this gap, we introduce StockBench, a contamination-free benchmark designed to evaluate LLM agents in realistic, multi-month stock trading environments. Agents receive daily market signals -- including prices, fundamentals, and news -- and must make sequential buy, sell, or hold decisions. Performance is assessed using financial metrics such as cumulative return, maximum drawdown, and the Sortino ratio. Our evaluation of state-of-the-art proprietary (e.g., GPT-5, Claude-4) and open-weight (e.g., Qwen3, Kimi-K2, GLM-4.5) models shows that while most LLM agents struggle to outperform the simple buy-and-hold baseline, several models demonstrate the potential to deliver higher returns and manage risk more effectively. These findings highlight both the challenges and opportunities in developing LLM-powered financial agents, showing that excelling at static financial knowledge tasks does not necessarily translate into successful trading strategies. We release StockBench as an open-source resource to support reproducibility and advance future research in this domain.
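
The three evaluation metrics named above are standard and can be computed directly. A minimal sketch (per-period returns and an equity curve as plain lists; a zero risk-free rate is assumed):

```python
import math

def cumulative_return(equity):
    """Total growth of the equity curve, as a fraction of starting capital."""
    return equity[-1] / equity[0] - 1.0

def max_drawdown(equity):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    peak, worst = equity[0], 0.0
    for v in equity:
        peak = max(peak, v)
        worst = max(worst, (peak - v) / peak)
    return worst

def sortino_ratio(returns, risk_free=0.0):
    """Mean excess return over downside deviation: unlike the Sharpe
    ratio, only negative deviations are penalized."""
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / len(excess)
    dd = math.sqrt(sum(min(e, 0.0) ** 2 for e in excess) / len(excess))
    return mean / dd if dd > 0 else float("inf")
```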

  7. ModernVBERT: Towards Smaller Visual Document Retrievers

    Multimodal embedding models are gaining prevalence, notably for document retrieval as efficient alternatives to text-only pipelines. These models are typically built by finetuning large vision-language decoders (VLMs) with contrastive losses on text-image pairs. In this work, we show that, while cost-efficient, this repurposing approach often bottlenecks retrieval performance. Through controlled experiments, we establish a principled recipe for improving visual document retrieval models. We notably measure the impact of attention masking, image resolution, modality alignment data regimes, and late interaction centered contrastive objectives which emerge as central performance factors. Building on these insights, we release ModernVBERT, a compact 250M-parameter vision-language encoder that outperforms models up to 10 times larger when finetuned on document retrieval tasks. Models and code are made available at https://huggingface.co/ModernVBERT.

  8. RLP: Reinforcement as a Pretraining Objective

    The dominant paradigm for training large reasoning models starts with pre-training using next-token prediction loss on vast amounts of data. Reinforcement learning, while powerful in scaling reasoning, is introduced only as the very last phase of post-training, preceded by supervised fine-tuning. While dominant, is this an optimal way of training? In this paper, we present RLP, an information-driven reinforcement pretraining objective, that brings the core spirit of reinforcement learning -- exploration -- to the last phase of pretraining. The key idea is to treat chain-of-thought as an exploratory action, with rewards computed based on the information gain it provides for predicting future tokens. This training objective essentially encourages the model to think for itself before predicting what comes next, thus teaching an independent thinking behavior earlier in the pretraining. More concretely, the reward signal measures the increase in log-likelihood of the next token when conditioning on both context and a sampled reasoning chain, compared to conditioning on context alone. This approach yields a verifier-free dense reward signal, allowing for efficient training for the full document stream during pretraining. Specifically, RLP reframes reinforcement learning for reasoning as a pretraining objective on ordinary text, bridging the gap between next-token prediction and the emergence of useful chain-of-thought reasoning. Pretraining with RLP on Qwen3-1.7B-Base lifts the overall average across an eight-benchmark math-and-science suite by 19%. With identical post-training, the gains compound, with the largest improvements on reasoning-heavy tasks such as AIME25 and MMLU-Pro. Applying RLP to the hybrid Nemotron-Nano-12B-v2 increases the overall average from 42.81% to 61.32% and raises the average on scientific reasoning by 23%, demonstrating scalability across architectures and model sizes.
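
The reward described above is just the gain in next-token log-likelihood when conditioning on a sampled reasoning chain. A minimal sketch, taking the model's two probabilities for the observed next token as inputs:

```python
import math

def rlp_reward(p_next_with_cot, p_next_without_cot):
    """RLP-style information-gain reward: log p(y | context, chain)
    minus log p(y | context). Positive when the chain helps."""
    return math.log(p_next_with_cot) - math.log(p_next_without_cot)
```

Because the reward is dense (one signal per token) and needs no external verifier, it can be computed over an ordinary document stream during pretraining.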

  9. The Rogue Scalpel: Activation Steering Compromises LLM Safety

    Activation steering is a promising technique for controlling LLM behavior by adding semantically meaningful vectors directly into a model's hidden states during inference. It is often framed as a precise, interpretable, and potentially safer alternative to fine-tuning. We demonstrate the opposite: steering systematically breaks model alignment safeguards, making it comply with harmful requests. Through extensive experiments on different model families, we show that even steering in a random direction can increase the probability of harmful compliance from 0% to 2-27%. Alarmingly, steering benign features from a sparse autoencoder (SAE), a common source of interpretable directions, increases these rates by a further 2-4%. Finally, we show that combining 20 randomly sampled vectors that jailbreak a single prompt creates a universal attack, significantly increasing harmful compliance on unseen requests. These results challenge the paradigm of safety through interpretability, showing that precise control over model internals does not guarantee precise control over model behavior.
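
The intervention being audited is simple: add a scaled direction vector to a hidden state at inference time. A minimal sketch over plain lists (real implementations hook a specific transformer layer):

```python
def steer(hidden, direction, alpha):
    """Activation steering: h' = h + alpha * v. The paper's finding is
    that even a random `direction` can erode alignment safeguards."""
    assert len(hidden) == len(direction)
    return [h + alpha * d for h, d in zip(hidden, direction)]
```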

  10. Tree-based Dialogue Reinforced Policy Optimization for Red-Teaming Attacks

    Despite recent rapid progress in AI safety, current large language models remain vulnerable to adversarial attacks in multi-turn interaction settings, where attackers strategically adapt their prompts across conversation turns and pose a more critical yet realistic challenge. Existing approaches that discover safety vulnerabilities either rely on manual red-teaming with human experts or employ automated methods using pre-defined templates and human-curated attack data, with most focusing on single-turn attacks. However, these methods did not explore the vast space of possible multi-turn attacks, failing to consider novel attack trajectories that emerge from complex dialogue dynamics and strategic conversation planning. This gap is particularly critical given recent findings that LLMs exhibit significantly higher vulnerability to multi-turn attacks compared to single-turn attacks. We propose DialTree-RPO, an on-policy reinforcement learning framework integrated with tree search that autonomously discovers diverse multi-turn attack strategies by treating the dialogue as a sequential decision-making problem, enabling systematic exploration without manually curated data. Through extensive experiments, our approach not only achieves more than 25.9% higher ASR across 10 target models compared to previous state-of-the-art approaches, but also effectively uncovers new attack strategies by learning optimal dialogue policies that maximize attack success across multiple turns.

  11. VOGUE: Guiding Exploration with Visual Uncertainty Improves Multimodal Reasoning

    Reinforcement learning with verifiable rewards (RLVR) improves reasoning in large language models (LLMs) but struggles with exploration, an issue that still persists for multimodal LLMs (MLLMs). Current methods treat the visual input as a fixed, deterministic condition, overlooking a critical source of ambiguity and struggling to build policies robust to plausible visual variations. We introduce VOGUE (Visual Uncertainty Guided Exploration), a novel method that shifts exploration from the output (text) to the input (visual) space. By treating the image as a stochastic context, VOGUE quantifies the policy's sensitivity to visual perturbations using the symmetric KL divergence between a "raw" and "noisy" branch, creating a direct signal for uncertainty-aware exploration. This signal shapes the learning objective via an uncertainty-proportional bonus, which, combined with a token-entropy bonus and an annealed sampling schedule, effectively balances exploration and exploitation. Implemented within GRPO on two model scales (Qwen2.5-VL-3B/7B), VOGUE boosts pass@1 accuracy by an average of 2.6% on three visual math benchmarks and 3.7% on three general-domain reasoning benchmarks, while simultaneously increasing pass@4 performance and mitigating the exploration decay commonly observed in RL fine-tuning. Our work shows that grounding exploration in the inherent uncertainty of visual inputs is an effective strategy for improving multimodal reasoning.
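
The uncertainty signal is the symmetric KL divergence between the policy's token distributions under the raw and the perturbed image. A minimal sketch over discrete distributions (matching support and strictly positive q are assumed):

```python
import math

def kl(p, q):
    """KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def visual_uncertainty(p_raw, p_noisy):
    """VOGUE-style signal: symmetric KL between the 'raw' and 'noisy'
    branches; larger values mean the policy is more sensitive to
    visual perturbation, earning a larger exploration bonus."""
    return 0.5 * (kl(p_raw, p_noisy) + kl(p_noisy, p_raw))
```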

  12. The Unreasonable Effectiveness of Scaling Agents for Computer Use

    Computer-use agents (CUAs) hold promise for automating everyday digital tasks, but their unreliability and high variance hinder their application to long-horizon, complex tasks. We introduce Behavior Best-of-N (bBoN), a method that scales over agents by generating multiple rollouts and selecting among them using behavior narratives that describe the agents' rollouts. It enables both wide exploration and principled trajectory selection, substantially improving robustness and success rates. On OSWorld, our bBoN scaling method establishes a new state of the art (SoTA) at 69.9%, significantly outperforming prior methods and approaching human-level performance at 72%, with comprehensive ablations validating key design choices. We further demonstrate strong generalization results to different operating systems on WindowsAgentArena and AndroidWorld. Crucially, our results highlight the unreasonable effectiveness of scaling CUAs, when you do it right: effective scaling requires structured trajectory understanding and selection, and bBoN provides a practical framework to achieve this.
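
The scaling recipe reduces to best-of-N with selection over summaries rather than raw trajectories. In the sketch below, `narrate` and `judge` are hypothetical stand-ins for the paper's behavior-narrative generator and comparator:

```python
def behavior_best_of_n(rollouts, narrate, judge):
    """Behavior Best-of-N sketch: summarize each rollout into a short
    narrative, then let a judge score narratives and pick the winner."""
    narratives = [narrate(r) for r in rollouts]
    best = max(range(len(rollouts)), key=lambda i: judge(narratives[i]))
    return rollouts[best]
```

Judging compact narratives instead of full low-level action traces is what makes comparing many parallel rollouts tractable.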

  13. Ovi: Twin Backbone Cross-Modal Fusion for Audio-Video Generation

    Audio-video generation has often relied on complex multi-stage architectures or sequential synthesis of sound and visuals. We introduce Ovi, a unified paradigm for audio-video generation that models the two modalities as a single generative process. By using blockwise cross-modal fusion of twin-DiT modules, Ovi achieves natural synchronization and removes the need for separate pipelines or post hoc alignment. To facilitate fine-grained multimodal fusion modeling, we initialize an audio tower with an architecture identical to that of a strong pretrained video model. Trained from scratch on hundreds of thousands of hours of raw audio, the audio tower learns to generate realistic sound effects, as well as speech that conveys rich speaker identity and emotion. Fusion is obtained by jointly training the identical video and audio towers via blockwise exchange of timing (via scaled-RoPE embeddings) and semantics (through bidirectional cross-attention) on a vast video corpus. Our model enables cinematic storytelling with natural speech and accurate, context-matched sound effects, producing movie-grade video clips. All the demos, code and model weights are published at https://aaxwaz.github.io/Ovi

  14. RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning

    Fine-grained visual reasoning remains a core challenge for multimodal large language models (MLLMs). The recently introduced ReasonMap highlights this gap by showing that even advanced MLLMs struggle with spatial reasoning in structured and information-rich settings such as transit maps, a task of clear practical and scientific importance. However, standard reinforcement learning (RL) on such tasks is impeded by sparse rewards and unstable optimization. To address this, we first construct ReasonMap-Plus, an extended dataset that introduces dense reward signals through Visual Question Answering (VQA) tasks, enabling effective cold-start training of fine-grained visual understanding skills. Next, we propose RewardMap, a multi-stage RL framework designed to improve both visual understanding and reasoning capabilities of MLLMs. RewardMap incorporates two key designs. First, we introduce a difficulty-aware reward design that incorporates detail rewards, directly tackling the sparse rewards while providing richer supervision. Second, we propose a multi-stage RL scheme that bootstraps training from simple perception to complex reasoning tasks, offering a more effective cold-start strategy than conventional Supervised Fine-Tuning (SFT). Experiments on ReasonMap and ReasonMap-Plus demonstrate that each component of RewardMap contributes to consistent performance gains, while their combination yields the best results. Moreover, models trained with RewardMap achieve an average improvement of 3.47% across 6 benchmarks spanning spatial reasoning, fine-grained visual reasoning, and general tasks beyond transit maps, underscoring enhanced visual understanding and reasoning capabilities.

  15. A Rigorous Benchmark with Multidimensional Evaluation for Deep Research Agents: From Answers to Reports

    Artificial intelligence is undergoing the paradigm shift from closed language models to interconnected agent systems capable of external perception and information integration. As a representative embodiment, Deep Research Agents (DRAs) systematically exhibit the capabilities for task decomposition, cross-source retrieval, multi-stage reasoning, and structured output, which markedly enhance performance on complex and open-ended tasks. However, existing benchmarks remain deficient in evaluation dimensions, response formatting, and scoring mechanisms, limiting their capacity to assess such systems effectively. This paper introduces a rigorous benchmark and a multidimensional evaluation framework tailored to DRAs and report-style responses. The benchmark comprises 214 expert-curated challenging queries distributed across 10 broad thematic domains, each accompanied by manually constructed reference bundles to support composite evaluation. The framework enables comprehensive evaluation of long-form reports generated by DRAs, incorporating integrated scoring metrics for semantic quality, topical focus, and retrieval trustworthiness. Extensive experimentation confirms the superior performance of mainstream DRAs over web-search-tool-augmented reasoning models, yet reveals considerable scope for further improvement. This study provides a robust foundation for capability assessment, architectural refinement, and paradigm advancement in DRA systems.

Solidot (6)

  1. Intel in talks with AMD over chip fabrication

    Over the past few weeks Intel has received investment and backing from the White House, Nvidia, and SoftBank, and is in talks with Apple about fabricating chips. Beyond that, its long-time rival AMD is also a negotiating partner. The Intel-AMD talks are at an early stage: the chip giant hopes its fabs can manufacture AMD's chips, which until now have been produced mainly by TSMC, though Intel's fabs currently lack the advanced process technology needed for AMD's most advanced chips. As with the Apple talks, the AMD negotiations may not result in any agreement.

  2. Indian high court orders doctors to write legible prescriptions

    Doctors' handwritten prescriptions are notoriously indecipherable; often no one but the pharmacist can read them. While hearing a rape case, a judge at an Indian high court read a doctor's medico-legal report and could not make out a single word. Justice Jasgurpreet Singh Puri issued an order declaring that "a clear and legible medical prescription is a fundamental right." The court directed the government to add handwriting to the medical school curriculum and set a two-year timetable for rolling out digital prescriptions. Justice Puri said that until digital prescriptions arrive, all doctors must write prescriptions clearly in capital letters. Indian Medical Association president Dilip Bhanushali said cities have already adopted digital prescriptions, but busy doctors in small towns and rural areas still scrawl theirs by hand.

  3. Hackers claim to have breached Red Hat's GitHub repositories

    An extortion group calling itself Crimson Collective claims to have breached Red Hat's GitHub repositories and stolen nearly 570 GB of data, including 800 Customer Engagement Reports (CERs) that may contain sensitive information about customers' networks and platforms. Red Hat confirmed a security incident affecting its consulting business but declined to confirm the hackers' claims. On Telegram the group published a full directory listing of the stolen repositories along with a list of CERs spanning 2020-2025. Notable organizations on the CER list include Bank of America, T-Mobile, AT&T, Fidelity, Kaiser, the Mayo Clinic, Walmart, Costco, the U.S. Naval Surface Warfare Center, the FAA, and the House of Representatives. The hackers say they tried to contact Red Hat with an extortion demand but received only a template reply directing them to submit a vulnerability report to its security team.

  4. Cancer rates are rising among millennials

    Since 2000, cancer incidence among people aged 15-49 has risen 10%, while rates among the elderly have declined slightly. Young women's cancer rate is 83% higher than that of men in the same age group. A study of 150,000 people presented at an American Association for Cancer Research meeting found that, judging by blood biomarkers, millennials appear to be aging biologically faster than earlier generations. This acceleration was associated with up to 42% higher risk of cancers such as lung cancer, gastrointestinal tumors, and uterine malignancies. Researchers link the rising incidence to drugs taken during pregnancy, ultra-processed food, artificial light, circadian disruption from shift work, and chemical exposure.

  5. Pathogenic yeast strains detected in urban air

    As city dwellers know, leaving the metropolis for the seaside offers a change of scenery or a mental reset. A study published in ACS's Environmental Science & Technology Letters adds a new reason for a coastal trip: it found that urban air harbors pathogenic Candida yeast strains that were absent from coastal air samples, revealing a potential transmission route. Candida is a group of common microbes that live harmlessly on human skin and the mucous membranes of internal organs, but under certain conditions these strains can overgrow and cause vaginal yeast infections or thrush; such infections are known to spread through direct contact or bodily fluids. Earlier work had detected Candida DNA in the air, suggesting the yeast can travel airborne. For a full year, the researchers collected monthly air samples in Hong Kong and in a nearby sparsely populated area facing the South China Sea. In 12 urban air samples they found three Candida species the World Health Organization classifies as fungal pathogens: Candida albicans, Candida parapsilosis, and Candida tropicalis. No Candida was detected in the coastal samples. This geographic difference led the researchers to suspect the airborne yeast originates from industrial or urban sources such as sewage treatment plants. Some urban samples also contained pathogenic Candida species resistant to common antifungal drugs; the researchers suggest overuse of antifungals, pollutants such as heavy metals in urban environments, or rising temperatures may contribute to this resistance. Finally, the genome of one airborne Candida strain closely matched strains previously isolated from infected patients, suggesting the airborne strains may be infectious. The researchers say the study challenges the long-held assumption that Candida spreads mainly through direct contact, casting it as an emerging airborne pathogen, though more research is needed to trace urban sources of Candida and to fully understand the infectivity of these airborne particles.

  6. Jane Goodall dies at 91

    Jane Goodall, the renowned zoologist, primatologist, and anthropologist, has died at age 91. Famous for her field studies of wild chimpanzees, she was regarded as the world's foremost chimpanzee expert. Goodall began studying the social and family life of the Kasakela chimpanzee community in Tanzania's Gombe Stream National Park in 1960, observing behavior strikingly similar to that of humans. Her findings challenged two prevailing beliefs of the time: that only humans could make and use tools, and that chimpanzees were vegetarians. During her research she formed close bonds with the local chimpanzees, becoming the only human ever accepted into a chimpanzee community. She later devoted herself to environmental education and philanthropy, founding the well-known wildlife conservation organization the Jane Goodall Institute.