Education

Luanna delivered the student commencement address at Harvard’s 2025 University-wide graduation ceremony

"If we still believe in a shared future, let us not forget those who were labeled as enemies. They too are human. In seeing their humanity, we find our own. We do not rise by proving each other wrong. We rise by refusing to let one another go, bound by our shared humanity."

Google’s revolutionary AI video generation tool, VEO 3 (How will this impact SEA?)

VEO 3 is Google's revolutionary AI video generator, creating 8-second videos with synchronized audio from text prompts. Now available in 71 countries, including many SEA nations, it costs $19.99-$249.99/month. For Southeast Asia's mobile-first, culturally diverse region, VEO 3 democratizes video production, enabling small creators and cultural organizations to produce high-quality content at a fraction of traditional costs. However, challenges include English-only audio output and risks of cultural misrepresentation, requiring careful adoption to preserve authentic regional storytelling traditions.

Keyu Jin Discusses China’s Evolving Trade War Strategy with the U.S.

China's digital yuan strategy represents a comprehensive challenge to US financial hegemony, leveraging advanced fintech platforms like WeChat and cross-border payment systems like mBridge to create alternative monetary infrastructure. Through Belt and Road Initiative integration and partnerships with countries seeking dollar alternatives, China is systematically building parallel financial networks that could undermine SWIFT and dollar dominance. This technological-geopolitical fusion threatens to fragment global payments, reshape international monetary systems, and establish a multipolar digital financial order centered on Chinese-controlled platforms and yuan-denominated transactions.

MiniMax + n8n: Building a Personalized AI Podcast Generation Workflow

Summary of the n8n + MiniMax integration guide: This guide walks through installing and configuring the n8n workflow automation platform and integrating it with MiniMax's AI services. It covers three installation methods (npm, Docker, and VPS deployment), then focuses on configuring MiniMax API credentials, setting up HTTP Request nodes, and building a personalized podcast generation workflow with text-to-speech, voice cloning, and video generation. It closes with a complete production configuration, including an Nginx reverse proxy, SSL certificates, and PM2 process management, to keep the system running reliably.
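To make the "HTTP Request node" step concrete, here is a minimal sketch of the kind of JSON body such a node could POST to a MiniMax text-to-speech endpoint. The endpoint URL, model name, and field names below are illustrative assumptions, not the official API schema; consult the MiniMax API documentation for the exact contract.

```python
import json

# Hypothetical endpoint -- the real URL comes from the MiniMax docs.
MINIMAX_TTS_URL = "https://api.minimax.example/v1/text_to_speech"

def build_tts_payload(text: str, voice_id: str = "podcast_host") -> str:
    """Serialize one podcast segment into a TTS request body for an
    n8n HTTP Request node (field names are assumptions)."""
    payload = {
        "model": "speech-01",       # assumed model identifier
        "text": text,
        "voice_id": voice_id,
        "audio_format": "mp3",
    }
    return json.dumps(payload, ensure_ascii=False)

body = build_tts_payload("Welcome to today's episode.")
```

In n8n, this string would go in the HTTP Request node's JSON body, with the API key supplied via a credential rather than hard-coded in the workflow.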

An "AlphaGo Moment" for Science: DeepMind Releases AlphaEvolve

DeepMind's AlphaEvolve marks a historic leap for AI, from tool to partner in scientific discovery. The system couples Gemini large language models with an evolutionary algorithm and achieved the first improvement to matrix multiplication algorithms in 56 years, reducing 4×4 matrix multiplication from 49 scalar multiplications to 48. Even more striking, it optimized its own training infrastructure, improving Gemini training efficiency by 23% and saving Google's data centers hundreds of millions of dollars. Beyond classical mathematical problems, AlphaEvolve rediscovered the best known solution in 75% of test cases and found new improvements in 20%, heralding a new era of human-AI collaborative research.
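AlphaEvolve's 48-multiplication scheme is too large to reproduce here, but the baseline it beat is easy to show: Strassen's 1969 algorithm multiplies 2×2 matrices with 7 scalar multiplications instead of 8, and applying it recursively to 4×4 blocks gives 7×7 = 49 multiplications, the 56-year-old record the paragraph refers to.

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen, 1969).

    Recursing on 4x4 matrices as 2x2 blocks yields 7 * 7 = 49
    multiplications -- the baseline AlphaEvolve reduced to 48.
    """
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

The trick is that each `m_i` is reused in several output entries, trading extra additions (which are cheap) for one fewer multiplication (which dominates when the entries are themselves matrices).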

How to Build Qwen3’s Dual-Mode AI (0.6B to 235B)

Qwen3 introduces a dual-mode AI architecture that switches dynamically between "thinking" and "non-thinking" modes within a single model. Thinking mode produces explicit step-by-step reasoning for complex problems, while non-thinking mode delivers rapid, immediate responses. The elegant part is the mechanism: the two modes differ only in their chat templates during training, eliminating the need for separate specialized models while preserving both reasoning depth and response efficiency.
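A minimal sketch of how one template can encode both modes: Qwen3's public chat format wraps reasoning in `<think>...</think>` tags, and in non-thinking mode the template pre-fills an empty reasoning block so the model learns to answer immediately after it. The surrounding template below is simplified for illustration and omits system-message handling.

```python
def format_prompt(user_msg: str, thinking: bool) -> str:
    """Simplified Qwen3-style chat template with a mode switch."""
    prompt = (
        "<|im_start|>user\n" + user_msg + "<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
    if not thinking:
        # Non-thinking mode: pre-fill an empty reasoning block, so the
        # model is trained to skip deliberation and respond directly.
        prompt += "<think>\n\n</think>\n\n"
    return prompt
```

In thinking mode the model generates its own `<think>` block before the answer; in non-thinking mode the empty block is already in the prompt, so generation starts at the answer.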

DeepSeek’s Latest Technological Innovations: Paving the Way for R2 Model

DeepSeek's technological innovations include Multi-Head Latent Attention, reducing memory requirements by 85% versus competitors; advanced Mixture of Experts, scaling to 671B parameters while keeping training costs flat; and Multi-Token Prediction, with 90% second-token accuracy. Their upcoming R2 model is rumored for a May 2025 release.
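The Mixture of Experts scaling trick is worth spelling out: each token activates only a few experts, so total parameters can grow (e.g. to 671B) while per-token compute stays roughly constant. The sketch below shows generic top-k gating, not DeepSeek's exact routing function.

```python
import math

def top_k_route(logits, k=2):
    """Pick the top-k experts for one token and renormalize their gates.

    Generic MoE routing sketch: `logits` are the router's scores over
    experts; only the k highest-scoring experts run for this token, and
    their softmax weights are renormalized to sum to 1.
    """
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    weights = [math.exp(logits[i]) for i in top]
    total = sum(weights)
    return [(i, w / total) for i, w in zip(top, weights)]
```

With 4 experts and k=2, each token pays for 2 expert forward passes regardless of how many experts the model holds in total; that decoupling of capacity from compute is what makes 671B parameters affordable to train.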

Generative Pre-trained Auto-regressive Diffusion Transformer (GPDiT)

GPDiT (Generative Pre-trained Auto-regressive Diffusion Transformer) unifies diffusion modeling with autoregressive transformer prediction for long-range video synthesis. Operating in a continuous latent space with a parameter-free rotation-based time conditioning mechanism and lightweight causal attention, it autoregressively predicts future latent frames with a diffusion loss, producing temporally consistent, high-quality video and showing remarkable few-shot adaptation to new tasks and styles with minimal examples.
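"Parameter-free rotation-based time conditioning" can be illustrated with a small sketch: instead of a learned timestep embedding, pairs of feature channels are rotated by an angle derived from the diffusion timestep, the same trick rotary position embeddings use for positions. This is a generic interpretation of the idea under stated assumptions, not GPDiT's exact formulation.

```python
import math

def rotate_time(features, t, max_t=1000.0):
    """Condition features on timestep t by rotating channel pairs.

    No learned parameters: the rotation angle is a fixed function of t,
    and rotation preserves the feature norm, so conditioning does not
    rescale activations. Illustrative sketch only.
    """
    theta = 2 * math.pi * t / max_t
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    out = []
    for i in range(0, len(features) - 1, 2):
        x, y = features[i], features[i + 1]
        out.extend([x * cos_t - y * sin_t, x * sin_t + y * cos_t])
    if len(features) % 2:          # odd trailing channel passes through
        out.append(features[-1])
    return out
```

Because rotations are norm-preserving, the network sees the timestep without any shift in activation scale, which is the appeal of this style of conditioning.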

A Smarter Way to Fine-Tune LLMs: Summary

The reversal challenge in LLM fine-tuning: recent research reveals that standard fine-tuning causes LLMs to lose their reasoning flexibility. While models can perform logical reversals (if A→B, then B→A) and syllogisms through in-context learning, they fail at these same tasks after fine-tuning. A key discovery identifies "format specialization" as the culprit: models overfit to specific formats rather than learning the underlying logic. The proposed solution leverages the model's own in-context reasoning abilities to generate examples of the desired reasoning patterns, then incorporates these into the fine-tuning dataset. This approach bridges the gap between the rigid fine-tuning process and the dynamic flexibility of in-context learning.
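The data-pipeline shape of that solution can be sketched as follows. In the actual method the reversed examples would be generated by the model itself in-context; the hard-coded template below stands in for that model call, since the point here is how the augmented dataset is assembled, not a specific API.

```python
def reverse_fact(subject: str, relation: str, obj: str) -> dict:
    """Build a forward fine-tuning example plus its logical reversal.

    Stand-in for the paper's approach: the `backward` example would be
    produced by prompting the model in-context, not by this template.
    """
    forward = {"prompt": f"{subject} {relation}", "completion": obj}
    backward = {"prompt": f"Who {relation} {obj}?", "completion": subject}
    return {"forward": forward, "backward": backward}

facts = [("Ada Lovelace", "wrote", "the first published algorithm")]
dataset = []
for s, r, o in facts:
    pair = reverse_fact(s, r, o)
    dataset.extend([pair["forward"], pair["backward"]])
```

Fine-tuning on both directions exposes the model to the reversal pattern directly, rather than hoping it generalizes from forward-only examples.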

Qwen-3 Model Release Summary

Qwen-3: Frontier AI in an Open Package. Qwen-3 delivers eight powerful open-weight models featuring an innovative hybrid architecture that toggles between quick responses and deep reasoning. With sizes from 0.6B to 235B parameters, these models outperform competitors while requiring fewer resources. Pre-trained on 36 trillion tokens and featuring 128K context windows, Qwen-3 excels at coding and supports tool use via MCP. Available under Apache 2.0, it represents a major advancement in accessible AI, with multilingual capabilities across 119 languages.