Weekly Digest — 2026-W11

213 unique stories (2026-03-09 to 2026-03-15), aggregated across 8 sources.

Hacker News (42)

  1. Bluesky CEO Jay Graber is stepping down (bsky.social)
  2. Workers report watching Ray-Ban Meta-shot footage of people using the bathroom (arstechnica.com)
  3. JSLinux Now Supports x86_64 (bellard.org)
  4. Florida judge rules red light camera tickets are unconstitutional (cbs12.com)
  5. Building a Procedural Hex Map with Wave Function Collapse (felixturner.github.io)
  6. Jolla on track to ship new phone with Sailfish OS, user-replaceable battery (liliputing.com)
  7. After outages, Amazon to make senior engineers sign off on AI-assisted changes (arstechnica.com)
  8. Agents that run while I sleep (www.claudecodecamp.com)
  9. Yann LeCun raises $1B to build AI that understands the physical world (www.wired.com)
  10. Amazon is holding a mandatory meeting about AI breaking its systems (twitter.com)
  11. Debian decides not to decide on AI-generated contributions (lwn.net)
  12. Tony Hoare has died (blog.computationalcomplexity.org)

GitHub Trending (25)

  1. GoogleCloudPlatform / generative-ai
  2. openclaw / openclaw
  3. 666ghj / MiroFish
  4. karpathy / nanochat
  5. 666ghj / BettaFish
  6. NousResearch / hermes-agent
  7. msitarzewski / agency-agents
  8. promptfoo / promptfoo
  9. virattt / ai-hedge-fund
  10. obra / superpowers
  11. fishaudio / fish-speech
  12. microsoft / BitNet

Product Hunt (42)

  1. SCRAPR

    Turn any website into an API

  2. Kita

    Turn documents into signals for lenders

  3. Wideframe

    AI Coworker for Video Editors

  4. Flowripple

    Easily Trigger workflows from your SaaS

  5. OpenClix

    Agent-driven retention flows for mobile apps.

  6. simply

    ai nutrition app

  7. Crikket

    The open source bug reporting and feedback tool

  8. Book Reading Habit

    Finally read the books you buy

  9. On Demand Ads by beehiiv

    Premium sponsors, ready when you are

  10. Agent Skills

    Find skills for Claude Code, Cursor, Copilot & more

  11. VENTUNO Q

    Dual-brain edge AI computer by Qualcomm and Arduino

  12. Shipper 2.0

    Build web/mobile apps, sites and extensions by talking to AI

Hugging Face (25)

  1. Penguin-VL: Exploring the Efficiency Limits of VLM with LLM-based Vision Encoders

    Vision Language Model (VLM) development has largely relied on scaling model size, which hinders deployment on compute-constrained mobile and edge devices such as smartphones and robots. In this work, we explore the performance limits of compact (e.g., 2B and 8B) VLMs. We challenge the prevailing practice that state-of-the-art VLMs must rely on vision encoders initialized via massive contrastive pretraining (e.g., CLIP/SigLIP). We identify an objective mismatch: contrastive learning, optimized for discrimination, enforces coarse and category-level invariances that suppress fine-grained visual cues needed for dense captioning and complex VLM reasoning. To address this issue, we present Penguin-VL, whose vision encoder is initialized from a text-only LLM. Our experiments reveal that Penguin-Encoder serves as a superior alternative to traditional contrastive pretraining, unlocking a higher degree of visual fidelity and data efficiency for multimodal understanding. Across various image and video benchmarks, Penguin-VL achieves performance comparable to leading VLMs (e.g., Qwen3-VL) in mathematical reasoning and surpasses them in tasks such as document understanding, visual knowledge, and multi-perspective video understanding. Notably, these gains are achieved with a lightweight architecture, demonstrating that improved visual representation rather than model scaling is the primary driver of performance. Our ablations show that Penguin-Encoder consistently outperforms contrastive-pretrained encoders, preserving fine-grained spatial and temporal cues that are critical for dense perception and complex reasoning. This makes it a strong drop-in alternative for compute-efficient VLMs and enables high performance in resource-constrained settings. Code: https://github.com/tencent-ailab/Penguin-VL

  2. BandPO: Bridging Trust Regions and Ratio Clipping via Probability-Aware Bounds for LLM Reinforcement Learning

    Proximal constraints are fundamental to the stability of reinforcement learning for Large Language Models. While the canonical clipping mechanism in PPO serves as an efficient surrogate for trust regions, we identify a critical bottleneck: fixed bounds strictly constrain the upward update margin of low-probability actions, disproportionately suppressing high-advantage tail strategies and inducing rapid entropy collapse. To address this, we introduce Band-constrained Policy Optimization (BandPO). BandPO replaces canonical clipping with Band, a unified theoretical operator that projects trust regions defined by f-divergences into dynamic, probability-aware clipping intervals. Theoretical analysis confirms that Band effectively resolves this exploration bottleneck. We formulate this mapping as a convex optimization problem, guaranteeing a globally optimal numerical solution while deriving closed-form solutions for specific divergences. Extensive experiments across diverse models and datasets demonstrate that BandPO consistently outperforms canonical clipping and Clip-Higher, while robustly mitigating entropy collapse.
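
The contrast the abstract draws can be illustrated with a toy sketch. Note this is not the paper's Band operator (which is derived from f-divergence trust regions via convex optimization); the widening rule `eps * (1 + c * (1 - p_old))` below is a hypothetical stand-in that only shows how a probability-aware bound gives low-probability actions more upward update margin than fixed PPO clipping:

```python
def ppo_clip_bounds(p_old, eps=0.2):
    # canonical PPO clipping: the ratio interval is fixed regardless of how
    # likely the action was under the old policy
    return 1.0 - eps, 1.0 + eps

def band_like_bounds(p_old, eps=0.2, c=2.0):
    # probability-aware interval (hypothetical stand-in for Band): a
    # low-probability action gets a wider upward margin, so high-advantage
    # tail strategies are capped less aggressively
    return 1.0 - eps, 1.0 + eps * (1.0 + c * (1.0 - p_old))

def clipped_objective(ratio, advantage, lo, hi):
    # PPO-style pessimistic surrogate with generic bounds
    clipped = max(min(ratio, hi), lo)
    return min(ratio * advantage, clipped * advantage)
```

With these toy parameters, an action with `p_old = 0.01` sees its upper bound grow from 1.2 to roughly 1.6, while at `p_old = 1.0` the two schemes coincide.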

  3. Planning in 8 Tokens: A Compact Discrete Tokenizer for Latent World Model

    World models provide a powerful framework for simulating environment dynamics conditioned on actions or instructions, enabling downstream tasks such as action planning or policy learning. Recent approaches leverage world models as learned simulators, but their application to decision-time planning remains computationally prohibitive for real-time control. A key bottleneck lies in latent representations: conventional tokenizers encode each observation into hundreds of tokens, making planning both slow and resource-intensive. To address this, we propose CompACT, a discrete tokenizer that compresses each observation into as few as 8 tokens, drastically reducing computational cost while preserving essential information for planning. An action-conditioned world model built on the CompACT tokenizer achieves competitive planning performance with orders-of-magnitude faster planning, offering a practical step toward real-world deployment of world models.

  4. WildActor: Unconstrained Identity-Preserving Video Generation

    Production-ready human video generation requires digital actors to maintain strictly consistent full-body identities across dynamic shots, viewpoints and motions, a setting that remains challenging for existing methods. Prior methods often suffer from face-centric behavior that neglects body-level consistency, or produce copy-paste artifacts where subjects appear rigid due to pose locking. We present Actor-18M, a large-scale human video dataset designed to capture identity consistency under unconstrained viewpoints and environments. Actor-18M comprises 1.6M videos with 18M corresponding human images, covering both arbitrary views and canonical three-view representations. Leveraging Actor-18M, we propose WildActor, a framework for any-view conditioned human video generation. We introduce an Asymmetric Identity-Preserving Attention mechanism coupled with a Viewpoint-Adaptive Monte Carlo Sampling strategy that iteratively re-weights reference conditions by marginal utility for balanced manifold coverage. Evaluated on the proposed Actor-Bench, WildActor consistently preserves body identity under diverse shot compositions, large viewpoint transitions, and substantial motions, surpassing existing methods in these challenging settings.

  5. Progressive Residual Warmup for Language Model Pretraining

    Transformer architectures serve as the backbone for most modern Large Language Models, therefore their pretraining stability and convergence speed are of central concern. Motivated by the logical dependency of sequentially stacked layers, we propose Progressive Residual Warmup (ProRes) for language model pretraining. ProRes implements an "early layer learns first" philosophy by multiplying each layer's residual with a scalar that gradually warms up from 0 to 1, with deeper layers taking longer warmup steps. In this way, deeper layers wait for early layers to settle into a more stable regime before contributing to learning. We demonstrate the effectiveness of ProRes through pretraining experiments across various model scales, as well as normalization and initialization schemes. Comprehensive analysis shows that ProRes not only stabilizes pretraining but also introduces a unique optimization trajectory, leading to faster convergence, stronger generalization and better downstream performance. Our code is available at https://github.com/dandingsky/ProRes.
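
The schedule is simple enough to sketch. The linear ramp and the per-layer window below are assumptions (the abstract specifies only a 0-to-1 warmup in which deeper layers warm up over more steps); `base` and `per_layer` are hypothetical hyperparameters:

```python
def residual_scale(layer: int, step: int, base: int = 1000, per_layer: int = 500) -> float:
    # "early layer learns first": layer `layer`'s residual branch is scaled
    # by a scalar ramping linearly from 0 to 1; deeper layers get a longer
    # warmup window, so they start contributing only after earlier layers
    # have settled into a more stable regime.
    window = base + layer * per_layer
    return min(1.0, step / window)

# Inside a transformer block, the residual update would then read:
#   h = x + residual_scale(layer, step) * sublayer(x)
```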

  6. RoboMME: Benchmarking and Understanding Memory for Robotic Generalist Policies

    Memory is critical for long-horizon and history-dependent robotic manipulation. Such tasks often involve counting repeated actions or manipulating objects that become temporarily occluded. Recent vision-language-action (VLA) models have begun to incorporate memory mechanisms; however, their evaluations remain confined to narrow, non-standardized settings. This limits their systematic understanding, comparison, and progress measurement. To address these challenges, we introduce RoboMME: a large-scale standardized benchmark for evaluating and advancing VLA models in long-horizon, history-dependent scenarios. Our benchmark comprises 16 manipulation tasks constructed under a carefully designed taxonomy that evaluates temporal, spatial, object, and procedural memory. We further develop a suite of 14 memory-augmented VLA variants built on the π0.5 backbone to systematically explore different memory representations across multiple integration strategies. Experimental results show that the effectiveness of memory representations is highly task-dependent, with each design offering distinct advantages and limitations across different tasks. Videos and code can be found at our website https://robomme.github.io.

  7. Lost in Stories: Consistency Bugs in Long Story Generation by LLMs

    What happens when a storyteller forgets its own story? Large Language Models (LLMs) can now generate narratives spanning tens of thousands of words, but they often fail to maintain consistency throughout. When generating long-form narratives, these models can contradict their own established facts, character traits, and world rules. Existing story generation benchmarks focus mainly on plot quality and fluency, leaving consistency errors largely unexplored. To address this gap, we present ConStory-Bench, a benchmark designed to evaluate narrative consistency in long-form story generation. It contains 2,000 prompts across four task scenarios and defines a taxonomy of five error categories with 19 fine-grained subtypes. We also develop ConStory-Checker, an automated pipeline that detects contradictions and grounds each judgment in explicit textual evidence. Evaluating a range of LLMs through five research questions, we find that consistency errors show clear tendencies: they are most common in factual and temporal dimensions, tend to appear around the middle of narratives, occur in text segments with higher token-level entropy, and certain error types tend to co-occur. These findings can inform future efforts to improve consistency in long-form narrative generation. Our project page is available at https://picrew.github.io/constory-bench.github.io/.

  8. Holi-Spatial: Evolving Video Streams into Holistic 3D Spatial Intelligence

    The pursuit of spatial intelligence fundamentally relies on access to large-scale, fine-grained 3D data. However, existing approaches predominantly construct spatial understanding benchmarks by generating question-answer (QA) pairs from a limited number of manually annotated datasets, rather than systematically annotating new large-scale 3D scenes from raw web data. As a result, their scalability is severely constrained, and model performance is further hindered by domain gaps inherent in these narrowly curated datasets. In this work, we propose Holi-Spatial, the first fully automated, large-scale, spatially-aware multimodal dataset, constructed from raw video inputs without human intervention, using the proposed data curation pipeline. Holi-Spatial supports multi-level spatial supervision, ranging from geometrically accurate 3D Gaussian Splatting (3DGS) reconstructions with rendered depth maps to object-level and relational semantic annotations, together with corresponding spatial Question-Answer (QA) pairs. Following a principled and systematic pipeline, we further construct Holi-Spatial-4M, the first large-scale, high-quality 3D semantic dataset, containing 12K optimized 3DGS scenes, 1.3M 2D masks, 320K 3D bounding boxes, 320K instance captions, 1.2M 3D grounding instances, and 1.2M spatial QA pairs spanning diverse geometric, relational, and semantic reasoning tasks. Holi-Spatial demonstrates exceptional performance in data curation quality, significantly outperforming existing feed-forward and per-scene optimized methods on datasets such as ScanNet, ScanNet++, and DL3DV. Furthermore, fine-tuning Vision-Language Models (VLMs) on spatial reasoning tasks using this dataset has also led to substantial improvements in model performance.

  9. LoGeR: Long-Context Geometric Reconstruction with Hybrid Memory

    Feedforward geometric foundation models achieve strong short-window reconstruction, yet scaling them to minutes-long videos is bottlenecked by quadratic attention complexity or limited effective memory in recurrent designs. We present LoGeR (Long-context Geometric Reconstruction), a novel architecture that scales dense 3D reconstruction to extremely long sequences without post-optimization. LoGeR processes video streams in chunks, leveraging strong bidirectional priors for high-fidelity intra-chunk reasoning. To manage the critical challenge of coherence across chunk boundaries, we propose a learning-based hybrid memory module. This dual-component system combines a parametric Test-Time Training (TTT) memory to anchor the global coordinate frame and prevent scale drift, alongside a non-parametric Sliding Window Attention (SWA) mechanism to preserve uncompressed context for high-precision adjacent alignment. Remarkably, this memory architecture enables LoGeR to be trained on sequences of 128 frames, and generalize up to thousands of frames during inference. Evaluated across standard benchmarks and a newly repurposed VBR dataset with sequences of up to 19k frames, LoGeR substantially outperforms prior state-of-the-art feedforward methods--reducing ATE on KITTI by over 74%--and achieves robust, globally consistent reconstruction over unprecedented horizons.

  10. Believe Your Model: Distribution-Guided Confidence Calibration

    Large Reasoning Models have demonstrated remarkable performance with the advancement of test-time scaling techniques, which enhance prediction accuracy by generating multiple candidate responses and selecting the most reliable answer. While prior work has shown that internal model signals like confidence scores can partly indicate response correctness and exhibit a distributional correlation with accuracy, such distributional information has not been fully utilized to guide answer selection. Motivated by this, we propose DistriVoting, which incorporates distributional priors as another signal alongside confidence during voting. Specifically, our method (1) first decomposes the mixed confidence distribution into positive and negative components using Gaussian Mixture Models, (2) then applies a reject filter based on positive/negative samples from them to mitigate overlap between the two distributions. In addition, to further alleviate the overlap from the perspective of the distribution itself, we propose SelfStepConf, which uses step-level confidence to dynamically adjust the inference process, increasing the separation between the two distributions to improve the reliability of confidences in voting. Experiments across 16 models and 5 benchmarks demonstrate that our method significantly outperforms state-of-the-art approaches.
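
A minimal sketch of the decomposition step, with a toy reject filter. This is an illustration only: the EM routine, the quartile initialization, and the midpoint threshold are all assumptions standing in for the paper's actual GMM decomposition and positive/negative filtering:

```python
import math

def fit_gmm2(xs, iters=100):
    # EM for a two-component 1-D Gaussian mixture over confidence scores
    xs = sorted(xs)
    n = len(xs)
    mu = [xs[n // 4], xs[(3 * n) // 4]]   # init means from the quartiles
    var = [0.01, 0.01]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            dens = [
                w[k] / math.sqrt(2 * math.pi * var[k])
                * math.exp(-((x - mu[k]) ** 2) / (2 * var[k]))
                for k in (0, 1)
            ]
            s = dens[0] + dens[1] + 1e-300  # guard against underflow
            resp.append([dens[0] / s, dens[1] / s])
        # M-step: update weights, means, variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, xs)) / nk)
    return w, mu, var

def keep_confident(answers, confs):
    # toy reject filter: keep answers whose confidence falls closer to the
    # higher-mean ("positive") component than to the lower one
    _, mu, _ = fit_gmm2(confs)
    thresh = (max(mu) + min(mu)) / 2
    return [a for a, c in zip(answers, confs) if c >= thresh]
```

Voting would then proceed over the surviving answers only, so low-confidence candidates from the "negative" component no longer dilute the majority.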

  11. How Far Can Unsupervised RLVR Scale LLM Training?

    Unsupervised reinforcement learning with verifiable rewards (URLVR) offers a pathway to scale LLM training beyond the supervision bottleneck by deriving rewards without ground truth labels. Recent works leverage model intrinsic signals, showing promising early gains, yet their potential and limitations remain unclear. In this work, we revisit URLVR and provide a comprehensive analysis spanning taxonomy, theory and extensive experiments. We first classify URLVR methods into intrinsic versus external based on reward sources, then establish a unified theoretical framework revealing that all intrinsic methods converge toward sharpening the model's initial distribution. This sharpening mechanism succeeds when initial confidence aligns with correctness but fails catastrophically when misaligned. Through systematic experiments, we show intrinsic rewards consistently follow a rise-then-fall pattern across methods, with collapse timing determined by model prior rather than engineering choices. Despite these scaling limits, we find intrinsic rewards remain valuable in test-time training on small datasets, and propose Model Collapse Step to measure model prior, serving as a practical indicator for RL trainability. Finally, we explore external reward methods that ground verification in computational asymmetries, showing preliminary evidence they may escape the confidence-correctness ceiling. Our findings chart boundaries for intrinsic URLVR while motivating paths toward scalable alternatives.

  12. CARE-Edit: Condition-Aware Routing of Experts for Contextual Image Editing

    Unified diffusion editors often rely on a fixed, shared backbone for diverse tasks, suffering from task interference and poor adaptation to heterogeneous demands (e.g., local vs global, semantic vs photometric). In particular, prevalent ControlNet and OmniControl variants combine multiple conditioning signals (e.g., text, mask, reference) via static concatenation or additive adapters which cannot dynamically prioritize or suppress conflicting modalities, thus resulting in artifacts like color bleeding across mask boundaries, identity or style drift, and unpredictable behavior under multi-condition inputs. To address this, we propose Condition-Aware Routing of Experts (CARE-Edit) that aligns model computation with specific editing competencies. At its core, a lightweight latent-attention router assigns encoded diffusion tokens to four specialized experts--Text, Mask, Reference, and Base--based on multi-modal conditions and diffusion timesteps: (i) a Mask Repaint module first refines coarse user-defined masks for precise spatial guidance; (ii) the router applies sparse top-K selection to dynamically allocate computation to the most relevant experts; (iii) a Latent Mixture module subsequently fuses expert outputs, coherently integrating semantic, spatial, and stylistic information to the base images. Experiments validate CARE-Edit's strong performance on contextual editing tasks, including erasure, replacement, text-driven edits, and style transfer. Empirical analysis further reveals task-specific behavior of specialized experts, showcasing the importance of dynamic, condition-aware processing to mitigate multi-condition conflicts.

Techmeme (42)

  1. HPE reports Q1 revenue up 18% YoY to $9.3B, vs. $9.37B est., Cloud and AI revenue down 2.7% YoY to $6.3B but reports an AI server backlog of $5B (Dina Bass/Bloomberg)

    Dina Bass / Bloomberg: Hewlett Packard Enterprise Co. gave an outlook for revenue in the current quarter that exceeded analysts' estimates, a sign the company …

  2. Filing: Anthropic says it had $5B+ in all-time revenue since 2023 and could lose billions after clients paused deal talks due to supply-chain risk designation (Paresh Dave/Wired)

    Paresh Dave / Wired: Executives at the AI startup say companies paused deal talks after the Trump administration labeled it a supply-chain risk …

  3. More than 30 staffers from OpenAI and Google, including DeepMind chief scientist Jeff Dean, file an amicus brief in support of Anthropic in its fight with DOD (Maxwell Zeff/Wired)

    Maxwell Zeff / Wired: Google DeepMind chief scientist Jeff Dean is among the AI researchers and engineers rushing to Anthropic's defense.

  4. MacBook Pro 16" (M5 Max) review: "Super core" architecture and 40-core GPU deliver beastly performance, but still retains a five-year-old design (Brian Westover/PCMag)

    Brian Westover / PCMag: "Super core" architecture delivers incredible performance.

  5. Apple Studio Display XDR review: great reference picture modes, much improved camera, and 120Hz support on newer Macs, but expensive at $3300 and only for Macs (John Higgins/The Verge)

    John Higgins / The Verge: It's been almost exactly four years since Apple released the 5K Studio Display that so many wanted, even if it didn't really deliver as a high-end display.

  6. SoftBank's stock is down ~48% since Nov. 3, as scrutiny into the scale of its OpenAI involvement grows; on Monday, SoftBank fell 9.8% on Stargate delay reports (Financial Times)

    Financial Times: Japanese group has suffered from recent share falls and a negative outlook from rating agency S&P.

  7. Sandbar, which is developing the Stream ring, a $249+ AI-powered wearable that transcribes audio notes, raised a $23M Series A, bringing total funding to $36M (Alex Konrad/Upstarts Media)

    Alex Konrad / Upstarts Media: Sandbar CEO Mina Fahmi has raised $23M in Series A funding to fuel the launch of his AI note-taking ring, called Stream.

  8. Niantic Spatial partners with Coco Robotics to integrate a visual positioning system trained on data from Pokemon Go and Ingress into a fleet of delivery robots (Will Douglas Heaven/MIT Technology Review)

    Will Douglas Heaven / MIT Technology Review: Pokémon Go was the world's first augmented-reality megahit. Released in 2016 by the Google spinout Niantic …

  9. Leaked memo: a top Senate administrator gave aides the green light to use ChatGPT, Gemini, and Copilot for official Senate work, including preparing briefings (Catie Edmondson/New York Times)

    Catie Edmondson / New York Times: New guidelines said Senate aides could use A.I. tools for official work, including research, drafting and editing documents …

  10. Internal doc: the State Department moved its internal chatbot from Claude Sonnet 4.5 to GPT-4.1, following Trump's directive to cancel Anthropic contracts (Nextgov/FCW)

    Nextgov/FCW: The agency moved its chatbot to operate on OpenAI's GPT-4.1, an internal document shows.

  11. Filing: Microsoft files an amicus brief in support of Anthropic and advocates for a temporary restraining order to block the DOD's supply chain risk designation (Ashley Capoot/CNBC)

    Ashley Capoot / CNBC: Microsoft threw its support behind Anthropic on Tuesday, saying a judge should issue a restraining order that would block …

  12. Amazon expands its healthcare AI assistant Health AI to its website and app; it was previously only available on the app for One Medical (Aisha Malik/TechCrunch)

    Aisha Malik / TechCrunch: Amazon announced on Tuesday that it's expanding access to its healthcare AI assistant to its website and app. The assistant, called Health AI …

Solidot (37)

  1. How would one million satellites affect the sky?

    SpaceX plans to launch one million satellites, ostensibly to build data centers in space. Setting aside the feasibility of orbital data centers (there is little), how would a million satellites affect each of us? SpaceX has filed its launch proposal with the FCC; the public comment period closed last Friday, and the overwhelming majority of more than a thousand comments opposed the plan and asked the FCC to halt it. SpaceX has already put tens of thousands of satellites into Earth orbit; a million would be a hundredfold increase. The company launches on average twice a week, and its satellites are constantly going up and constantly falling back down. Growing evidence shows that rocket launches release pollutants into the air, affecting the atmosphere, contributing a potential greenhouse effect, and possibly worsening the threat to the ozone layer. If all one million SpaceX satellites eventually deorbited, a satellite would reenter the atmosphere every three minutes. The more satellites in orbit, the greater the chance of collisions. Satellites also interfere with astronomy; the company has worked with the International Astronomical Union to reduce that impact, but a million satellites is an entirely different order of magnitude, and astronomers are stunned.

  2. FBI identifies protesters through Proton Mail

    The FBI identified the leaders of the Atlanta protest group Defend the Atlanta Forest/Stop Cop City using information provided by the Swiss email provider Proton Mail. Proton Mail maintains that it must comply with Swiss law. The official Stop Cop City Facebook account used the address defendtheatlantaforest@protonmail.com. Citing the Mutual Legal Assistance Treaty (MLAT), the FBI asked the Swiss Department of Justice to obtain the information from Proton Mail; Switzerland has MLATs with more than 30 countries. Proton provided the information to the Swiss judicial authorities, which then passed it to the FBI. Proton AG communications head Edward Shone said the company did not hand information directly to the FBI; the FBI obtained it through the Swiss Department of Justice.

  3. Satellites reveal stability risks for bridges in North America and Africa

    According to a study published in Nature Communications, researchers at the University of Houston and other institutions used satellites to analyze 744 bridges worldwide and assess their condition. The results show that bridges in North America are generally in the worst shape, followed by those in Africa. Many of the bridges analyzed are approaching the end of their design life. North America's bridge-building boom came in the 1960s, and many bridges are now decades old, at or beyond their original design lifetimes. The researchers used a satellite remote-sensing method called Multi-temporal Interferometric Synthetic Aperture Radar (MTInSAR) to monitor tiny displacements in bridge structures.

  4. Tinnitus is closely linked to sleep

    A phantom percept occurs when the brain fools us into seeing, hearing, feeling, or smelling something that is not actually there. Tinnitus is the most common phantom percept; despite many hypotheses, no definitive cause or cure has been found, and about 15% of the world's population is affected. Many tinnitus patients report poor sleep quality and disrupted sleep patterns, but the underlying link between tinnitus and sleep, a vital physiological function, has only recently been recognized. Neuroscientists at the University of Oxford propose that the large spontaneous brain waves occurring during deep, non-REM sleep may suppress the brain activity that causes tinnitus. Experiments with ferrets show that the overactive brain activity of ferrets with tinnitus weakens once they enter non-REM sleep. The results suggest that deep sleep may help relieve tinnitus, potentially revealing a natural mechanism by which the brain regulates abnormal activity.

  5. Researchers grow and harvest chickpeas in simulated lunar soil

    According to a study published in Scientific Reports, researchers at Texas A&M grew and harvested chickpeas in simulated lunar soil, though whether the chickpeas are safe to eat has not yet been determined. Lunar soil is barren, lacks nutrients, and is rich in heavy metals. To overcome these problems, the researchers used vermicompost, compost produced by earthworms from waste, to supply essential microbes and nutrients, and arbuscular mycorrhizal fungi to promote plant growth and reduce uptake of toxic heavy metals. The results show that simulated lunar soil amended with compost and symbiotic fungi can grow and yield chickpeas much like ordinary Earth soil. The researchers will next analyze the chickpeas' nutritional content and test for heavy metals to confirm they are safe for human consumption.

  6. New law requires adult sites to verify Australians' ages

    Three months after banning children under 16 from social media, a new Australian law requires websites with adult content to verify that visitors from the country are at least 18, with fines for violators. Australia's online safety regulator says the move is meant to protect children from harmful content. eSafety Commissioner Julie Inman Grant said that children are not allowed into bars, liquor stores, adult shops, or casinos, yet no such safeguards have existed online. Platforms may need to use facial recognition, digital IDs, or credit card information to verify visitors. Under the new rules, search engines, app stores, social media and gaming platforms, porn sites, and AI systems including chatbots must take effective measures to prevent children from accessing adult content.

  7. Many international game developers plan to skip this year's GDC

    Tens of thousands of game developers and producers will gather in San Francisco this week for the week-long Game Developers Conference (GDC), a tradition since 1988. But many international developers will be absent this year because they no longer feel safe in the US; however important GDC is to their work and careers, they do not want to take unnecessary risks. Godot Foundation executive director Emilio Coppola said that no non-American he knows plans to attend. Nazih Fares, creative director of indie studio Le Cabinet du Savoir, said he has no desire to experience being detained at the border. Developers who attended last year took extra layers of security precautions, and many say they are skipping this year's GDC to avoid trouble.

  8. Meta argues uploading pirated e-books is fair use

    To train its large models, Meta downloaded more than a hundred terabytes of e-books via BitTorrent from shadow-library platforms such as Z-Library and LibGen. In an ongoing lawsuit brought by book authors, Meta's lawyers argue that uploading pirated e-books to strangers over BitTorrent constitutes fair use. Meta also stresses that the data helped the US establish its lead in global AI. The court ruled last year that training large models on pirated e-books is fair use, but Meta can still be held liable for downloading and sharing the books via BitTorrent, which the authors contend amounts to participating in infringement. In supplemental filings submitted last week, Meta argued that sharing files while downloading torrents is also fair use, on the grounds that it is inherent to the BitTorrent protocol: uploading is not a choice but how the technology works. Meta further argued that sharing via BitTorrent was a necessary means of obtaining this valuable (but pirated) data; datasets such as Anna's Archive are only available via torrent, making BitTorrent the only option.

  9. Why do cats falling from heights always land on their feet?

    Cats falling from heights always land on their feet. Scientists have long debated the mechanism behind this, proposing four hypotheses: the tuck-and-turn model, in which a cat tucks in one set of paws so it can rotate different parts of its body; James Clerk Maxwell's falling-figure-skater model, in which the cat adjusts its angular momentum by pulling in or extending its paws; the bend-and-twist model, in which the cat bends at the waist so the two halves of its body counter-rotate; and the propeller-tail model, in which the cat spins its tail like a propeller to reverse its body's rotation. According to a new study published in The Anatomical Record, Japanese scientists removed the spines from five donated cat cadavers, preserving the ligaments and intervertebral discs, separated the thoracic and lumbar sections, and placed them in a torsion rig to measure the force needed to twist each section and the limit of its rotation. They also dropped two live cats eight times each, capturing high-speed photographs of the cats in free fall. The results show that the upper spine twists farther than the lower spine, with a "sweet spot" at a twist angle of about 50 degrees where there is almost no resistance; the lower spine has no such point. This supports the tuck-and-turn hypothesis, and the high-speed photography also captured the tuck-and-turn motion. The researchers additionally observed that cats consistently prefer turning to the right, possibly because the asymmetric arrangement of their internal organs makes rightward rotation easier.

  10. Data centers become targets of infrastructure attacks

    The tech industry often talks about "the cloud" as something abstract and far away. But the cloud runs in data centers, and data centers have addresses, and those addresses can be hit by drones. Last week three data centers operated by Amazon AWS were attacked, two in the UAE and one in Bahrain. The attacks knocked the facilities offline, causing outages across the region for banking, payments, food-delivery apps, and enterprise software. It was the first time data centers had been targeted in an attack, and experts believe it will certainly not be the last. Data centers are rapidly becoming critical strategic assets, and with that, vulnerable targets.

  11. Switzerland enshrines the right to use cash in its constitution

    Swiss voters overwhelmingly approved a constitutional amendment guaranteeing the right to use cash. Besides Switzerland, European countries including Hungary, Slovakia, and Slovenia have also written cash protections into their constitutions. Official results show 73.4% of voters backed the amendment. It was proposed by the government to counter a similar initiative from the Swiss Freedom Movement, which had gathered more than 100,000 signatures for its cash-protection initiative, triggering the referendum. Because the government considered parts of the group's proposal too radical, that initiative won only 46% of the vote.

  12. Survey finds a third of Americans believe the apocalypse will come in their lifetime

    Americans who believe the end times are coming are not a small minority. According to a report published in the Journal of Personality and Social Psychology, researchers surveyed 1,409 Americans of various faiths and found that a third believe the apocalypse will arrive within their lifetime. Some believe the end will be human-caused; others believe it will be brought about by God or supernatural forces. Those who believe the end is near and that humans are to blame perceive greater risk and are more supportive of extreme actions to counter the threat, whereas those who believe God controls the end of the world are less likely to support preventive measures.