1. Media source:
msn.com
2. Reporter byline:
Zac Bowden
3. Full headline:
Microsoft announces distilled DeepSeek R1 models for Windows 11 Copilot+ PCs
4. Full article text:
Microsoft has announced that it will be bringing “NPU-optimized” versions
of the DeepSeek-R1 AI model to Copilot+ PCs soon, first with Snapdragon X
devices, followed by PCs with Intel Lunar Lake and AMD Ryzen AI 9 processors.
The first release will be the DeepSeek-R1-Distill-Qwen-1.5B model, which will be
available to developers via the Microsoft AI Toolkit. 7B and 14B variants will
arrive later.
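The distilled 1.5B checkpoint DeepSeek published (deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) is already openly available, so here is a minimal sketch of running it with the Hugging Face transformers library. This is my own illustration, not Microsoft's NPU-optimized build; the Copilot+ PC release will instead ship through the AI Toolkit in an NPU-friendly format.

```python
# Minimal sketch: running the openly published distilled R1 checkpoint on
# CPU/GPU with transformers. NOT the NPU-optimized build described in the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The distilled models are chat models, so format the prompt with the chat template.
messages = [{"role": "user", "content": "Explain model distillation in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```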
Windows 11 Copilot+ PCs are devices equipped with at least 256GB of storage,
16GB of RAM, and an NPU capable of at least 40 TOPS. This means some older
NPU-equipped PCs won't be able to run these models locally.
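For concreteness, that baseline can be written down as a trivial check. The minimums come straight from the article; the example hardware values are hypothetical.

```python
# The Copilot+ PC minimums stated in the article, expressed as a simple check.
COPILOT_PLUS_MINIMUMS = {"storage_gb": 256, "ram_gb": 16, "npu_tops": 40}

def meets_copilot_plus_minimums(storage_gb: float, ram_gb: float, npu_tops: float) -> bool:
    """Return True if a machine meets the stated Copilot+ PC baseline."""
    return (storage_gb >= COPILOT_PLUS_MINIMUMS["storage_gb"]
            and ram_gb >= COPILOT_PLUS_MINIMUMS["ram_gb"]
            and npu_tops >= COPILOT_PLUS_MINIMUMS["npu_tops"])

# Hypothetical example: an older laptop with a ~10 TOPS NPU falls short of the 40 TOPS bar.
print(meets_copilot_plus_minimums(storage_gb=512, ram_gb=16, npu_tops=10))  # False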
"These optimized models let developers build and deploy AI-powered
applications that run efficiently on-device, taking full advantage of the
powerful NPUs in Copilot+ PCs” says a Microsoft blog post announcing
DeepSeek R1 support. “With our work on Phi Silica, we were able to harness
highly efficient inferencing – delivering very competitive time to first
token and throughput rates, while minimally impacting battery life and
consumption of PC resources … Additionally, we take advantage of Windows
Copilot Runtime (WCR) to scale across the diverse Windows ecosystem with ONNX
QDQ format.”
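The quote refers to the ONNX QDQ (quantize-dequantize) format. Microsoft has not published its exact conversion pipeline, but as a hedged illustration, ONNX Runtime's generic static quantization tooling can emit QDQ-format models. The file names and calibration data below are placeholders, not anything from the article.

```python
# Sketch: producing a QDQ-format quantized ONNX model with ONNX Runtime's generic
# quantization tooling. Illustrative only; not Microsoft's WCR pipeline.
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantFormat, QuantType, quantize_static

class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a few random batches as calibration data (placeholder only)."""
    def __init__(self, input_name: str, shape, n_batches: int = 8):
        self._batches = iter(
            [{input_name: np.random.rand(*shape).astype(np.float32)} for _ in range(n_batches)]
        )

    def get_next(self):
        return next(self._batches, None)

quantize_static(
    model_input="model_fp32.onnx",      # hypothetical float32 export
    model_output="model_qdq.onnx",      # output model with QuantizeLinear/DequantizeLinear pairs
    calibration_data_reader=RandomCalibrationReader("input", shape=(1, 128)),
    quant_format=QuantFormat.QDQ,       # the QDQ representation mentioned in the quote
    weight_type=QuantType.QInt8,
)
```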
In the blog post, Microsoft highlights how it worked to ensure the R1 models
could run locally on NPU-based hardware. “First, we leverage a sliding
window design that unlocks super-fast time to first token and long context
support despite not having dynamic tensor support in the hardware stack.
Second, we use the 4-bit QuaRot quantization scheme to truly take advantage
of low bit processing.”
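To make the low-bit claim concrete, here is a toy 4-bit round trip in NumPy. This is only generic symmetric quantization: QuaRot additionally rotates weights and activations (for example with Hadamard transforms) to suppress outliers before quantizing, a step this sketch deliberately omits.

```python
# Worked toy example of 4-bit weight quantization (symmetric, per-tensor),
# showing what "low bit processing" trades off. Not QuaRot itself.
import numpy as np

def quantize_int4(w: np.ndarray):
    """Map float weights onto the 16 signed levels [-8, 7] with a single scale."""
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)  # stored in 4 bits in practice
    return q, scale

def dequantize_int4(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate float weights from 4-bit codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize_int4(q, scale)
print("max abs error:", np.abs(w - w_hat).max())  # bounded by half a quantization step
```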
Microsoft says the 1.5B Distilled R1 model will be available soon, and that
it will be accessible via the AI Toolkit extension in VS Code. Developers can
use Playground to experiment with DeepSeek R1 locally on compatible Copilot+
PCs. In addition to supporting DeepSeek R1 locally, Microsoft is also making
these AI models available in the cloud via Azure AI Foundry. “As part of
Azure AI Foundry, DeepSeek R1 is accessible on a trusted, scalable, and
enterprise-ready platform, enabling businesses to seamlessly integrate
advanced AI while meeting SLAs, security, and responsible AI commitments—all
backed by Microsoft’s reliability and innovation.”
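On the cloud side, a hedged sketch of what calling the model through Azure AI Foundry might look like with the azure-ai-inference Python SDK follows. The endpoint URL, environment variable names, and the "DeepSeek-R1" deployment name are assumptions for illustration; the article does not specify them.

```python
# Hedged sketch: chat completion against a DeepSeek R1 deployment in Azure AI Foundry.
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_AI_ENDPOINT"],   # assumed env var holding your Foundry endpoint
    credential=AzureKeyCredential(os.environ["AZURE_AI_KEY"]),  # assumed env var holding the key
)

response = client.complete(
    model="DeepSeek-R1",                        # assumed deployment name
    messages=[UserMessage(content="Summarize the benefits of on-device inference.")],
    max_tokens=256,
)
print(response.choices[0].message.content)
```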
Microsoft has moved fast to support DeepSeek R1, even as US tech firms panic
over its existence. OpenAI now claims that DeepSeek stole proprietary code to
develop its AI model, which cost less than $10 million to build. This stands
in stark contrast to the AI models developed by US firms, which have cost
billions so far.
(Note: Qwen is Alibaba's open-source model, Tongyi Qianwen.)
5. Full article link (or short URL); links from reposting outlets such as YAHOO, LINE, or MSN are not allowed:
https://bit.ly/42z9B7O
6. Remarks:
Microsoft is currently OpenAI's largest single corporate shareholder.
Beyond supporting it in the cloud, Microsoft will also help convert DeepSeek into
an NPU-friendly model format, so everyone can run the DeepSeek-R1 model locally
on an AI PC.
DeepSeek-R1 has two distilled versions, one based on Llama and one based on Qwen;
based on the validation results, the Qwen version performs somewhat better than
the Llama one.