June 5th, 2025

Microsoft and LangChain: Leading the Way in AI Security for Open Source on Azure

Marlene Mhangami
Senior Developer Advocate, Python & AI

For developers building in the field of AI, the industry moves so quickly that we often prioritize speed and execution over everything else. To keep up with the latest changes, many developers and enterprises are turning to open-source AI tools. One of the most popular tools today is LangChain, an open-source framework for building AI applications. Just over a year ago, Microsoft launched the Secure Future Initiative (SFI) to improve the security of Microsoft, our customers, and the industry at large. This initiative extends to open source, with the goal of keeping developers free to innovate not just quickly but securely.

Chart showing downloads of LangChain and the OpenAI SDK for each month from November through April. As of April, LangChain downloads exceed OpenAI SDK downloads.
In Python, LangChain is now more downloaded than OpenAI on PyPI. Source: LangChain Interrupt 2025 Keynote

If you’re building AI applications for enterprise, LangChain provides building blocks for multi-agent architectures, along with a wide range of third-party integrations for Large Language Models (LLMs) and vector stores. While its large community ecosystem offers many advantages, its network of partner integrations can introduce security considerations that deserve attention.

Microsoft’s AI Security Guidance outlines two main risks for LLM-driven apps (a rough mitigation sketch for both follows the list):

  • Information Leakage: Sensitive data being unintentionally exposed to unauthorized parties, potentially leading to data breaches.   
  • Privilege Escalation: When a user gains higher access levels than intended, allowing them to perform unauthorized actions.  
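
Neither risk is unique to LangChain, but both tend to surface through the tools an agent is allowed to call and the data it is allowed to see. As a rough mitigation sketch (not taken from Microsoft’s guidance, and using a hypothetical lookup_order_status tool and ORDERS_API_KEY variable), an agent tool can be limited to a single read-only action, with credentials read from the environment rather than from prompts:

import os

from langchain_core.tools import tool


@tool
def lookup_order_status(order_id: str) -> str:
    """Return the shipping status of a single order (read-only)."""
    # Credentials stay in the environment, never in prompts or model context,
    # which limits what can leak into a response.
    api_key = os.environ.get("ORDERS_API_KEY", "")  # would be passed to a real backend call
    # Hypothetical in-memory lookup standing in for a narrowly scoped API.
    statuses = {"A-100": "shipped", "A-101": "processing"}
    return statuses.get(order_id, "unknown")

Binding only narrow tools like this, rather than a general shell or database client, keeps an agent’s effective privileges close to what the task actually needs.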

LangChain provides hundreds of integrations to third-party services, including many experimental technologies. Due to the nature of agentic flows, building LLM-driven apps often involves code execution and evaluation, as well as data processing. All of these elements expose applications to the risks mentioned above.
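
The experimental Python REPL tool is a concrete illustration (the minimal sketch below is mine, not code from the security review): it executes whatever code string it receives, so an agent wired to it runs model-generated code with the application’s own permissions.

from langchain_experimental.tools import PythonREPLTool

# The REPL executes arbitrary Python; if the string it receives is produced by
# an LLM that has seen untrusted input, this is effectively remote code execution.
repl = PythonREPLTool()
print(repl.invoke("import os; print(os.listdir('.'))"))

Where a flow genuinely needs code execution, sandboxing the interpreter or requiring review of generated code is the usual mitigation.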

Microsoft is committed to helping developers and enterprises safely adopt the latest innovations, including open source. We care about open source and maintain two open AI agent frameworks, Semantic Kernel and AutoGen. We also have a history of helping improve the security of some of the largest OSS projects and ecosystems. In line with this mission, earlier this year our security team reviewed LangChain and found several security issues in langchain-community, LangChain’s third-party integrations package, and langchain-experimental, the project’s package intended for research and experimental usage. While these two packages are optional, they could be used by customers who are unaware of the distinctions between LangChain’s core, community, and experimental packages.

Diagram showing that LangChain has a large ecosystem, including packages it provides, official provider integrations, and community third-party integrations.
LangChain has a large ecosystem, including packages it provides, official provider integrations, and community third-party integrations
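
The split in the diagram above is visible directly in package names, so it is straightforward to check which layers an environment actually depends on. A quick check (assuming the standard PyPI distribution names) might look like this:

from importlib import metadata

# Report which LangChain packages are installed; langchain-community and
# langchain-experimental are the optional layers discussed in this post.
for name in ("langchain-core", "langchain", "langchain-community", "langchain-experimental"):
    try:
        print(f"{name}=={metadata.version(name)}")
    except metadata.PackageNotFoundError:
        print(f"{name} is not installed")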

We’ve seen an increasing number of developers and enterprises using LangChain and LangGraph, so we took these issues seriously and reached out to LangChain to chart a path forward. Microsoft Principal Security Assurance Manager, Michael Scovetta, had the following to say about the situation: 

“When we examined LangChain and its associated projects, we identified several areas for security improvement to address before using it in our production systems. Microsoft is committed to making Azure the most secure place for running AI workloads, and our Developer Relations team is working with LangChain to improve security and make it easier for organizations to use safely.” – Michael Scovetta, Microsoft Principal Security Assurance Manager

We’re excited to be partnering with LangChain to address these security issues, beginning with Azure integrations and then moving on to full coverage of LangChain. LangChain CEO, Harrison Chase, says: 

“Over the past year and a half we’ve taken steps to make LangChain enterprise ready. Step one in this was rearchitecting the ecosystem to make packages like langchain-community and langchain-experimental optional and separate packages. As a next step, we’re excited to work with Microsoft to support more enterprises in their journey to leverage AI safely and effectively.” – Harrison Chase, LangChain Co-Founder and CEO

Microsoft is providing engineering hours, continuous integration tools and workflows to detect and prevent insecure code from being merged into the project in the future. We’re also supporting LangChain through Alpha-Omega, where we’re helping the project improve its documentation so organizations can better understand and avoid potential security pitfalls.   

We’ve worked with the LangChain team to create a LangChain-Azure mono-repo, and together we’re aiming to make Azure the most secure place for building with AI! To get started with Python, you can access our complete catalog of AI models from Azure AI Foundry by installing our new Azure AI package. Run:

pip install langchain-azure-ai

Then execute the following code:

from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
from langchain_core.messages import HumanMessage

# Point the model at your Azure AI Foundry resource and choose a deployed model.
model = AzureAIChatCompletionsModel(
    endpoint="https://{your-resource-name}.services.ai.azure.com/models",
    credential="your-api-key",
    model="deepseek/DeepSeek-R1-0528",
)

messages = [
    HumanMessage(content="Translate the following from English into Italian: 'hi!'")
]

# Stream the response and print it as a single string.
message_stream = model.stream(messages)
print("".join(chunk.content for chunk in message_stream))

You can also access Azure AI Foundry from JavaScript using the new langchain-azure-js package, and we’ve updated the community-created and community-owned LangChain4J package for Java developers.

As we look to the future, we think our work with LangChain will provide a model for AI developers building with open-source software. Microsoft runs on trust, and we will continue to work in the open to make AI apps built with this framework secure to run on Azure. 

Author

Marlene Mhangami
Senior Developer Advocate, Python & AI

Marlene is a Senior Developer Advocate specializing in Python and AI at Microsoft, a computer scientist, keynote speaker, and explorer. She is the current chair of the Association for Computing Machinery (ACM) practitioner board, a former vice chair of the Python Software Foundation, and led the first PyCon Africa.

2 comments

  • Mahadevan Padmanabhan

    Hi, good update. Was there any note on using LangGraph and Microsoft’s take on this? What about other frameworks like CrewAI, etc.?

    • Marlene Mhangami (Microsoft employee, Author)

      Hi! Yes, LangGraph is included in this work as it is LangChain’s AI agent framework. It is mostly secure because it falls under core. You can see my guide for using it to build a Deep Researcher from Build here: aka.ms/build/lab331.
      CrewAI is fantastic and we plan to do more with these frameworks in the future <3