OrangeBot.AI Digest — 2025-07-30
65 headlines across 5 sources, aggregated for the day.
Hacker News(15)
- Vibe code is legacy code (blog.val.town)
- The hype is the product (rys.io)
- Most Illinois farmland is not owned by farmers (www.chicagotribune.com)
- Fast (www.catherinejue.com)
- Australia widens teen social media ban to YouTube, scraps exemption (www.reuters.com)
- Optician Sans – A free font based on historical eye charts and optotypes (optician-sans.com)
- Crush: Glamourous AI coding agent for your favourite terminal (github.com)
- Big Tech Killed the Golden Age of Programming (www.taylor.gl)
- Writing memory efficient C structs (tomscheers.github.io)
- Try the Mosquito Bucket of Death (www.energyvanguard.com)
- Blog series on creating an OS in Rust (os.phil-opp.com)
- The HTML Hobbyist (www.htmlhobbyist.com)
- I launched 17 side projects. Result? I'm rich in expired domains
- Problem solving often a matter of cooking up an appropriate Markov chain (2007) (math.uchicago.edu)
- Our $100M Series B (oxide.computer)
GitHub Trending(11)
- 9001 / copyparty
Portable file server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++ all in one file, no deps
- roboflow / supervision
We write your reusable computer vision tools. 💜
- outline / outline
The fastest knowledge base for growing teams. Beautiful, realtime collaborative, feature packed, and markdown compatible.
- linshenkx / prompt-optimizer
A prompt optimizer that helps you write high-quality prompts
- tldr-pages / tldr
📚 Collaborative cheatsheets for console commands
- mikf / gallery-dl
Command-line program to download image galleries and collections from several image hosting sites
- sindresorhus / awesome
😎 Awesome lists about all kinds of interesting topics
- ashishpatel26 / 500-AI-Agents-Projects
The 500 AI Agents Projects is a curated collection of AI agent use cases across various industries. It showcases practical applications and provides links to open-source projects for implementation, illustrating how AI agents are transforming sectors such as healthcare, finance, education, retail, and more.
- cloudwego / eino
The ultimate LLM/AI application development framework in Golang.
- mattermost-community / focalboard
Focalboard is an open source, self-hosted alternative to Trello, Notion, and Asana.
- LMCache / LMCache
Supercharge Your LLM with the Fastest KV Cache Layer
Product Hunt(15)
- Rustic AI
Your visual AI design editor
- Droidrun
Give AI native control of physical & virtual phones.
- ToolSDK.ai
5000+ MCP Servers & AI Tools, 1 Line of Code
- ChatGPT study mode
A new way to learn with ChatGPT
- Candlestick AI
Invest in custom portfolio ideas, researched by AI agents
- Meet-Ting
Free AI assistant for email scheduling in early access
- Ideogram Character
Persistent character model that works with a single image
- Setter AI
Turn website visitors into booked calls
- Cubox AI 3.0
Library Insight for everything you’ve saved.
- OpenWispr
100% local open source AI speech-to-text model
- Lock-in
Your personal focus assistant
- ClueoMCP
The open protocol for AI personality injection
- DesignQA
The fastest way to report design bugs
- OnlyCheat
An AI desktop assistant that's silent, invisible, and intelligent
- QuickSheet
Instantly create and edit spreadsheets from your menu bar
Hugging Face(9)
- HunyuanWorld 1.0: Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels
Creating immersive and playable 3D worlds from texts or images remains a fundamental challenge in computer vision and graphics. Existing world generation approaches typically fall into two categories: video-based methods that offer rich diversity but lack 3D consistency and rendering efficiency, and 3D-based methods that provide geometric consistency but struggle with limited training data and memory-inefficient representations. To address these limitations, we present HunyuanWorld 1.0, a novel framework that combines the best of both worlds for generating immersive, explorable, and interactive 3D scenes from text and image conditions. Our approach features three key advantages: 1) 360° immersive experiences via panoramic world proxies; 2) mesh export capabilities for seamless compatibility with existing computer graphics pipelines; 3) disentangled object representations for augmented interactivity. The core of our framework is a semantically layered 3D mesh representation that leverages panoramic images as 360° world proxies for semantic-aware world decomposition and reconstruction, enabling the generation of diverse 3D worlds. Extensive experiments demonstrate that our method achieves state-of-the-art performance in generating coherent, explorable, and interactive 3D worlds while enabling versatile applications in virtual reality, physical simulation, game development, and interactive content creation.
- X-Omni: Reinforcement Learning Makes Discrete Autoregressive Image Generative Models Great Again
Numerous efforts have been made to extend the "next token prediction" paradigm to visual contents, aiming to create a unified approach for both image generation and understanding. Nevertheless, attempts to generate images through autoregressive modeling with discrete tokens have been plagued by issues such as low visual fidelity, distorted outputs, and failure to adhere to complex instructions when rendering intricate details. These shortcomings are likely attributed to cumulative errors during autoregressive inference or information loss incurred during the discretization process. Probably due to this challenge, recent research has increasingly shifted toward jointly training image generation with diffusion objectives and language generation with autoregressive objectives, moving away from unified modeling approaches. In this work, we demonstrate that reinforcement learning can effectively mitigate artifacts and largely enhance the generation quality of a discrete autoregressive modeling method, thereby enabling seamless integration of image and language generation. Our framework comprises a semantic image tokenizer, a unified autoregressive model for both language and images, and an offline diffusion decoder for image generation, termed X-Omni. X-Omni achieves state-of-the-art performance in image generation tasks using a 7B language model, producing images with high aesthetic quality while exhibiting strong capabilities in following instructions and rendering long texts.
- ChemDFM-R: A Chemical Reasoner LLM Enhanced with Atomized Chemical Knowledge
While large language models (LLMs) have achieved impressive progress, their application in scientific domains such as chemistry remains hindered by shallow domain understanding and limited reasoning capabilities. In this work, we focus on the specific field of chemistry and develop a Chemical Reasoner LLM, ChemDFM-R. We first construct a comprehensive dataset of atomized knowledge points to enhance the model's understanding of the fundamental principles and logical structure of chemistry. Then, we propose a mix-sourced distillation strategy that integrates expert-curated knowledge with general-domain reasoning skills, followed by domain-specific reinforcement learning to enhance chemical reasoning. Experiments on diverse chemical benchmarks demonstrate that ChemDFM-R achieves state-of-the-art performance while providing interpretable, rationale-driven outputs. Further case studies illustrate how explicit reasoning chains significantly improve the reliability, transparency, and practical utility of the model in real-world human-AI collaboration scenarios.
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning
The exponential growth in demand for GPU computing resources, driven by the rapid advancement of Large Language Models, has created an urgent need for automated CUDA optimization strategies. While recent advances in LLMs show promise for code generation, current SOTA models (e.g., R1, o1) achieve low success rates in improving CUDA speed. In this paper, we introduce CUDA-L1, an automated reinforcement learning framework for CUDA optimization. CUDA-L1 achieves performance improvements on the CUDA optimization task: trained on NVIDIA A100, it delivers an average speedup of x17.7 across all 250 CUDA kernels of KernelBench, with peak speedups reaching x449. Furthermore, the model also demonstrates excellent portability across GPU architectures, achieving average speedups of x17.8 on H100, x19.0 on RTX 3090, x16.5 on L40, x14.7 on H800, and x13.9 on H20 despite being optimized specifically for A100. Beyond these benchmark results, CUDA-L1 demonstrates several remarkable properties: 1) Discovers a variety of CUDA optimization techniques and learns to combine them strategically to achieve optimal performance; 2) Uncovers fundamental principles of CUDA optimization; 3) Identifies non-obvious performance bottlenecks and rejects seemingly beneficial optimizations that harm performance. The capabilities of CUDA-L1 demonstrate that reinforcement learning can transform an initially poor-performing LLM into an effective CUDA optimizer through speedup-based reward signals alone, without human expertise or domain knowledge. More importantly, the trained RL model extends the acquired reasoning abilities to new kernels. This paradigm opens possibilities for automated optimization of CUDA operations, and holds promise to substantially improve GPU efficiency and alleviate the rising pressure on GPU computing resources.
- MaPPO: Maximum a Posteriori Preference Optimization with Prior Knowledge
As the era of large language models (LLMs) acting on behalf of users unfolds, Preference Optimization (PO) methods have become a central approach to aligning LLMs with human preferences and improving performance. We propose Maximum a Posteriori Preference Optimization (MaPPO), a framework for learning from preferences that explicitly incorporates prior reward knowledge into the optimization objective. While existing methods such as Direct Preference Optimization (DPO) and its variants treat preference learning as a Maximum Likelihood Estimation (MLE) problem, MaPPO extends this paradigm by integrating prior reward estimates into a principled Maximum a Posteriori (MaP) objective. This not only generalizes DPO and its variants, but also enhances alignment by mitigating the oversimplified binary classification of responses. More importantly, MaPPO introduces no additional hyperparameter, and supports preference optimization in both offline and online settings. In addition, MaPPO can be used as a plugin with consistent improvement on DPO variants, including the widely used SimPO, IPO, and CPO. Extensive empirical evaluations of different model sizes and model series on three standard benchmarks, including MT-Bench, AlpacaEval 2.0, and Arena-Hard, demonstrate consistent improvements in alignment performance without sacrificing computational efficiency.
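To make the MLE-versus-MaP distinction concrete, the standard DPO loss (which the abstract characterizes as maximum likelihood over preference pairs) is shown below, followed by a purely schematic MAP-style variant in which a prior reward estimate $r_0$ shifts the implicit reward margin. The second equation is illustrative only; the exact MaPPO objective is in the paper.

```latex
% Standard DPO: MLE over preference pairs, y_w preferred to y_l.
\mathcal{L}_{\mathrm{DPO}}(\theta) =
  -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
    - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]

% Schematic MAP-style extension (illustrative, not the paper's exact form):
% a prior reward estimate r_0 shifts the preference margin.
\mathcal{L}_{\mathrm{MAP}}(\theta) =
  -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
    - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    + r_0(x, y_w) - r_0(x, y_l)
    \right)
  \right]
```

The first equation is the published DPO objective; the added $r_0$ terms in the second merely sketch how prior reward knowledge could enter such a loss.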
- AnimalClue: Recognizing Animals by their Traces
Wildlife observation plays an important role in biodiversity conservation, necessitating robust methodologies for monitoring wildlife populations and interspecies interactions. Recent advances in computer vision have significantly contributed to automating fundamental wildlife observation tasks, such as animal detection and species identification. However, accurately identifying species from indirect evidence like footprints and feces remains relatively underexplored, despite its importance in contributing to wildlife monitoring. To bridge this gap, we introduce AnimalClue, the first large-scale dataset for species identification from images of indirect evidence. Our dataset consists of 159,605 bounding boxes encompassing five categories of indirect clues: footprints, feces, eggs, bones, and feathers. It covers 968 species, 200 families, and 65 orders. Each image is annotated with species-level labels, bounding boxes or segmentation masks, and fine-grained trait information, including activity patterns and habitat preferences. Unlike existing datasets primarily focused on direct visual features (e.g., animal appearances), AnimalClue presents unique challenges for classification, detection, and instance segmentation tasks due to the need for recognizing more detailed and subtle visual features. In our experiments, we extensively evaluate representative vision models and identify key challenges in animal identification from their traces. Our dataset and code are available at https://dahlian00.github.io/AnimalCluePage/
- MOVE: Motion-Guided Few-Shot Video Object Segmentation
This work addresses motion-guided few-shot video object segmentation (FSVOS), which aims to segment dynamic objects in videos based on a few annotated examples with the same motion patterns. Existing FSVOS datasets and methods typically focus on object categories, which are static attributes that ignore the rich temporal dynamics in videos, limiting their application in scenarios requiring motion understanding. To fill this gap, we introduce MOVE, a large-scale dataset specifically designed for motion-guided FSVOS. Based on MOVE, we comprehensively evaluate 6 state-of-the-art methods from 3 different related tasks across 2 experimental settings. Our results reveal that current methods struggle to address motion-guided FSVOS, prompting us to analyze the associated challenges and propose a baseline method, Decoupled Motion Appearance Network (DMA). Experiments demonstrate that our approach achieves superior performance in few-shot motion understanding, establishing a solid foundation for future research in this direction.
- MoHoBench: Assessing Honesty of Multimodal Large Language Models via Unanswerable Visual Questions
Recently, Multimodal Large Language Models (MLLMs) have achieved considerable advancements in vision-language tasks, yet may produce potentially harmful or untrustworthy content. Despite substantial work investigating the trustworthiness of language models, MLLMs' capability to act honestly, especially when faced with visually unanswerable questions, remains largely underexplored. This work presents the first systematic assessment of honesty behaviors across various MLLMs. We ground honesty in models' response behaviors to unanswerable visual questions, define four representative types of such questions, and construct MoHoBench, a large-scale MLLM honesty benchmark consisting of 12k+ visual question samples, whose quality is guaranteed by multi-stage filtering and human verification. Using MoHoBench, we benchmarked the honesty of 28 popular MLLMs and conducted a comprehensive analysis. Our findings show that: (1) most models fail to appropriately refuse to answer when necessary, and (2) MLLMs' honesty is not solely a language modeling issue, but is deeply influenced by visual information, necessitating the development of dedicated methods for multimodal honesty alignment. Therefore, we implemented initial alignment methods using supervised and preference learning to improve honesty behavior, providing a foundation for future work on trustworthy MLLMs. Our data and code can be found at https://github.com/DSTTSD/MoHoBench.
- Evaluating Deep Learning Models for African Wildlife Image Classification: From DenseNet to Vision Transformers
Wildlife populations in Africa face severe threats, with vertebrate numbers declining by over 65% in the past five decades. In response, image classification using deep learning has emerged as a promising tool for biodiversity monitoring and conservation. This paper presents a comparative study of deep learning models for automatically classifying African wildlife images, focusing on transfer learning with frozen feature extractors. Using a public dataset of four species (buffalo, elephant, rhinoceros, and zebra), we evaluate the performance of DenseNet-201, ResNet-152, EfficientNet-B4, and Vision Transformer ViT-H/14. DenseNet-201 achieved the best performance among convolutional networks (67% accuracy), while ViT-H/14 achieved the highest overall accuracy (99%), but with significantly higher computational cost, raising deployment concerns. Our experiments highlight the trade-offs between accuracy, resource requirements, and deployability. The best-performing CNN (DenseNet-201) was integrated into a Hugging Face Gradio Space for real-time field use, demonstrating the feasibility of deploying lightweight models in conservation settings. This work contributes to African-grounded AI research by offering practical insights into model selection, dataset preparation, and responsible deployment of deep learning tools for wildlife conservation.
Solidot(15)
- Futurehome pushes firmware that strips local device functions and forces subscriptions after filing for bankruptcy
Norwegian smart-home company Futurehome, after filing for bankruptcy, pushed a firmware update to products such as the Smarthub II that removes local functionality and puts basic features behind a paywall, forcing a subscription: unless customers pay an annual fee, even the device's basic functions are unusable. Futurehome launched the Smarthub in 2016 and had since sold its smart-home line, including smart thermostats, lighting, fire alarms, and carbon-monoxide alarms, as one-time purchases. Since the bankruptcy filing, the company requires customers to pay a subscription of 1,188 Norwegian kroner (about US$116.56) per year for basic functionality, claiming the fee is a necessary measure to stabilize operations after the bankruptcy.
- Microsoft admits it cannot guarantee the digital sovereignty of European countries
Microsoft has admitted that, under the US Cloud Act, it cannot guarantee the digital sovereignty of customers in France or other European countries. The Cloud Act allows the US government to access data held by American tech companies even when that data is stored on servers located overseas. Testifying before the French parliament, Microsoft France representatives Anton Carniaux and Pierre Lagarde said the company resists unfounded data requests but is legally obliged to comply with valid ones. According to Microsoft's transparency reports, it has so far received no US government request to access data stored on European servers, but geopolitical tensions have left EU countries uneasy. Lagarde said that over the past three years Microsoft has built a technical environment that minimizes data transfers and keeps European customer data within the EU.
- Survey finds 60% of Americans use AI for search, 37% for work
According to a survey by the Associated Press-NORC Center for Public Affairs Research, 60% of US adults use AI to search for information, while only 37% of respondents use AI for work and 40% use it for brainstorming. The survey of 1,437 adults, conducted July 10-14, shows a marked generational gap in AI adoption: among adults under 30, 74% use AI for information search and 62% for generating ideas, while among those over 60 only 23% use AI for brainstorming. About a third of Americans use AI to write email, create or edit images, or for entertainment. A quarter use AI for shopping, and 16% use AI for companionship, a figure that rises to 25% among younger adults.
- India's smartphone shipments to the US surpass China's for the first time
According to Canalys, in the second quarter (April-June) India's smartphone shipments to the US surpassed China's for the first time (44% versus 25%). US smartphone shipments grew 1% in Q2 2025. Uncertainty over the outcome of tariff negotiations with China has accelerated supply-chain repositioning: the share of US smartphone shipments assembled in China shrank from 61% in Q2 2024 to 25% in Q2 2025, with India absorbing most of the decline. Total shipments of "Made in India" smartphones grew 240% year over year and now account for 44% of US smartphone imports, up from just 13% of shipments in Q2 2024. In the quarter, iPhone shipments fell 11% year over year to 13.3 million units, Samsung grew 38% to 8.3 million, and Motorola rose 2% to 3.2 million. Google and TCL rounded out the top five, with Google up 13% to 0.8 million units and TCL down 23% to 0.7 million.
- Opera accuses Microsoft of anticompetitive tactics to promote its Edge browser
Opera filed a complaint on Tuesday with CADE, Brazil's antitrust authority, accusing Microsoft of giving its own Edge browser an unfair advantage over rivals. Opera says Microsoft preinstalls Edge on all Windows devices and sets it as the default, preventing competitors from competing on the merits of their products. Opera general counsel Aaron McParlan said Microsoft excludes browsers such as Opera from preinstallation and obstructs users from downloading alternatives. Opera, which says it is the third most popular PC browser in Brazil, wants CADE to open an investigation into Microsoft and require it to ensure fair competition.
- Microsoft bans a LibreOffice developer's account
After the open-source office suite LibreOffice publicly accused Microsoft of using deliberately complex file formats to lock in Office users, the account of one of its developers, Mike Kaganski, was banned by Microsoft on the grounds of "activity that violates the terms of service." Kaganski believes he did not violate any of Microsoft's terms and suspects he was falsely flagged by a bot-detection tool or similar, so he filed an appeal. But his experience may mirror that of Google users locked out of their accounts: no human support who can actually resolve the problem, and any responses that do come are automated, leaving the effort futile. It is a dilemma facing modern users who depend on cloud services.
- M8.8 earthquake strikes Russia's Kamchatka Peninsula
The US Geological Survey (USGS) reports that a magnitude 8.8 earthquake struck Russia's Kamchatka Peninsula at 23:24 UTC on July 29. Tsunami warnings were issued for coastal areas around the Pacific; according to Japan's Kyodo News, the first tsunami waves have already reached the Japanese coast, with a 40 cm wave observed at the port of Tokachi in Hokkaido. It is the strongest earthquake since the 2011 Tohoku earthquake off Japan (M9.1) and the sixth strongest ever recorded. No deaths have been reported so far.
- Adult site accuses Meta of downloading and seeding porn videos
Meta has been accused by book authors of downloading at least 81.7 TB of e-books from shadow libraries to train its AI models, while insisting it never seeded, uploaded, or shared the books. Now the adult-content company Strike 3 Holdings alleges that Meta not only downloaded its adult videos but seeded them too. Strike 3 says analysis from its private BitTorrent tracking tools shows that Meta has been torrenting and seeding its copyrighted adult videos since at least 2018, willfully infringing at least 2,396 films, with seeding sessions lasting days, weeks, or even months. Strike 3 claims Meta did this to download terabytes of e-books as fast as possible by seeding popular adult content, exploiting the BitTorrent incentive mechanism known as tit-for-tat. Some of the adult films may also have been secretly used to train AI models. Strike 3 seeks substantial damages and an injunction permanently barring Meta from pirating its videos.
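The tit-for-tat mechanism named in the Strike 3 complaint can be sketched as a simple reciprocal unchoke rule: a BitTorrent client reserves its upload slots for the peers that upload the most to it, so a heavy seeder gets served by more peers and downloads faster. This is an illustrative sketch only, not any real client's algorithm; all names and numbers below are made up.

```python
def select_unchoked(peers, slots=4):
    """Tit-for-tat sketch: unchoke the peers that have uploaded
    the most data to us, up to a fixed number of slots."""
    ranked = sorted(peers, key=lambda p: p["uploaded_to_us"], reverse=True)
    return [p["id"] for p in ranked[:slots]]

# A toy swarm: one free-rider and four peers that reciprocate.
swarm = [
    {"id": "leech_only", "uploaded_to_us": 0},
    {"id": "seeder_a", "uploaded_to_us": 512},
    {"id": "seeder_b", "uploaded_to_us": 256},
    {"id": "seeder_c", "uploaded_to_us": 64},
    {"id": "seeder_d", "uploaded_to_us": 128},
]

# The free-rider is choked; generous uploaders win the slots,
# which is the incentive a heavy seeder benefits from.
unchoked = select_unchoked(swarm)
print(unchoked)
```

Real clients add refinements (periodic re-evaluation, optimistic unchoking of one random peer), but the core incentive is this reciprocity.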
- Economist says Norwegians are too rich and too comfortable
In his new book "The Country That Became Too Rich", Norwegian economist Martin Bech Holte takes aim at his own country, arguing that Norway's wealth has damaged its economic health. Norway's sovereign wealth fund has reached US$2 trillion, about US$340,000 per capita, yet over the past two decades Norway's productivity growth has been the lowest among rich countries, and Norwegians take as many as 27.5 sick days a year, the most in the OECD. Norway spends US$20,000 per student on education, against an OECD average of US$14,000, but Norwegian students' test scores have declined steadily since 2015 and are now below the OECD average. Withdrawals from the sovereign fund now cover more than a fifth of the national annual budget, up from less than a tenth two decades ago.
- Chinese universities encourage students to use AI
Two years ago, Chinese universities warned students against using AI in their assignments. Today they have reversed course and encourage students to use AI, as long as they follow best practices. According to a survey by the MyCOS Institute, generative AI is now essentially ubiquitous: only 1% of students and faculty say they have never used AI in study or work, and nearly 60% of respondents use AI daily or several times a week. With the popularity of DeepSeek, generative AI is increasingly seen as a source of national pride, and campus discussion has shifted from worries about AI's impact on academic integrity toward encouraging students to build AI literacy, raise productivity, and stay ahead. A survey by Stanford's Institute for Human-Centered Artificial Intelligence (HAI) found that enthusiasm for AI in China leads the world, with 80% of people excited about AI, compared with just 35% in the US and 38% in the UK. An MIT Technology Review review of the AI strategies of 46 top Chinese universities found that nearly all of them introduced interdisciplinary AI general-education courses and AI-related degrees over the past year. Tsinghua University, Renmin University, Nanjing University, and Fudan University, among others, have launched AI literacy courses and degrees open to all students, not just computer science majors. In April 2025 the Ministry of Education issued national "AI + education" guidelines calling for comprehensive reform.
- Minors took part in making the Terracotta Army
Archaeologists at the Emperor Qinshihuang's Mausoleum Site Museum used ultra-depth-of-field microscopy to capture clear fingerprint impressions left more than 2,000 years ago, extracting over 100 fingerprints from more than 40 restored terracotta figures. The analysis indicates that minors were among the sculptors of the Terracotta Army. Museum researcher Li Xiaoxi said that comparative analysis of the fingerprints yielded information about the potters' age distribution and sex ratio. Preliminary analysis shows the vast majority of prints belong to adult men, consistent with the traditional view, but a small number of fingerprints from minors were also found. Which stages of production they took part in, and how the work was divided among them, will require further study.
- Beijing Firefox to stop operating Firefox's China business from September 29
Beijing Mozilla Online Ltd. (Beijing Firefox) announced that on May 8, 2025, Mozilla and Beijing Firefox agreed that Beijing Firefox will no longer operate the Firefox browser or any Firefox-related business in mainland China. From midnight on September 29, 2025, the official Chinese Firefox website (firefox.com.cn), the Firefox community site (mozilla.com.cn), the Firefox account service (accounts.firefox.com.cn), and the Firefox China homepage (home.firefoxchina.cn) will formally cease operation and all related functions will end. Effective immediately, www.firefox.com.cn no longer offers downloads of Firefox products. Chinese Firefox users need to sync the data held on Beijing Firefox's servers to their local machines before September 29, 2025; after that date all data will be deleted. Users of Mozilla Firefox elsewhere are unaffected.
- First ghost planetary nebula discovered in the Milky Way
A nebula designated SDSO1 has been found near the Andromeda galaxy. According to a preprint, SDSO1 may not belong to Andromeda at all but instead be an ancient planetary nebula inside the Milky Way, dubbed a "Ghost Planetary Nebula" (GPN). The study was carried out jointly by a group of amateur astrophotographers. From the bow-shock and trailing-tail structures visible in the images, and by matching them against the position and properties of EG Andromedae, a symbiotic binary in the Milky Way about 600 light-years from Earth, the team infers that the cloud is a shell ejected by that binary system hundreds of thousands of years ago, which then became a planetary nebula. Analysis shows EG Andromedae moving through the interstellar medium at about 107 km/s, producing the spectacular bow structure and a tail some 45 light-years long. Planetary nebulae at this late evolutionary stage are extremely tenuous and faint, becoming visible only through high-speed interaction with the surrounding medium, which makes SDSO1 the first proposed case of a ghost planetary nebula. The team also identified seven more candidate ghost planetary nebulae with similar features in the Milky Way, suggesting these ancient objects may be more common than previously thought.
- Can education slow cognitive decline?
According to a study published in Nature Medicine, education does not slow cognitive decline. An analysis of 407,356 episodic memory scores from 170,795 participants aged 50 and over, plus 15,157 brain MRI scans from 6,472 people, found that more education is associated with better memory function and larger intracranial volume, but it does not hold back the toll the years take on the brain. Regardless of education level, brains atrophy at a similar pace and cognitive ability declines in step. Education gives you a better starting line, but it does not change the pace of the marathon that is aging. Studying may not keep you young, but it can at least keep your mind clearer along the way.
- Security researchers find the SkyRover X1 is a rebranded DJI product
Drones from Shenzhen-based DJI face an informal ban at US customs. To circumvent it, DJI has been found selling its products under different brand names. The SkyRover X1, a US$750 drone sold on Amazon, turns out to be DJI's Mini 4 Pro. The SkyRover X1 has exactly the same specifications and features as the Mini 4 Pro and connects directly to DJI network infrastructure such as DJIGlobal, DJISupport, and DJIEnterprise. Hacker Kevin Finisterre successfully logged into SkyRover's systems using DJI credentials, and security researcher Jon Sawyer found that the SkyRover app uses the same encryption keys as DJI's software.