Zoom + AWS Virtual Participant Framework now available on GitHub
We know customer experience is top of mind when building an app or integration. That’s why the Zoom Developer Platform is designed to give you the flexibility and agility you need to develop your ideal solution.
The platform’s openness gives you access to the solutions and resources you need to build across different platforms and operating systems, all while working seamlessly with the full stack of other vendors and services you use to deliver valuable experiences at scale.
At Zoomtopia 2022, we provided insight into how we’re working with best-in-class service providers to foster an ecosystem of developer platforms that work together seamlessly to help you bring solutions to market. One of the key partners in our efforts to reduce development friction is AWS, which is working with us to develop a cost-effective approach that offers more capabilities. And today we’re pleased to announce that the AWS Virtual Participant Framework for RTC open-source project is now available to you and your developers on GitHub. This sample solution combines the real-time communication capabilities of the Zoom Meeting SDK with AWS AI services (Amazon Transcribe), serverless computing (AWS Lambda and AWS Fargate), and media streaming (Amazon Kinesis Video Streams), giving developers the building blocks to create meaningful experiences for end users.
Bringing AI to where connection and productivity happen
AI is in the news and everywhere we look: we’re in the midst of an AI renaissance. That means AWS and Zoom’s shared customers and developers are ready for the next generation of productivity apps. Determined to help make these apps a reality, we doubled down on our partnership with AWS over the last year so developers could bring them to market faster.
“The AWS Virtual Participant Framework for RTC removes undifferentiated heavy lifting in building custom integrations between Zoom and AWS. It also helps to reduce operational burden by standardizing meeting participant live-media access in the cloud,” said Sina Sojoodi, Principal Solutions Architect at AWS. “This translates to reduced costs of running fleets of virtual participants with containerization and serverless architecture.”
And we’re only getting started. With recent advancements in generative AI, natural language processing (NLP), and computer vision, we see tremendous opportunity for startup, enterprise, and public sector developers to build agent assist, live translation, visual content moderation, identity verification, and AI-assisted collaboration and productivity applications. Amazon Transcribe Live Call Analytics with Agent Assist, showcased in the GitHub project demo recording, is a great example of the types of solutions that are possible with this virtual participant framework.
Get started with the sample solution
To get up and running with the AWS Virtual Participant Framework for RTC, first set up a test account in AWS, ideally with administrative privileges. Then head over to the Zoom App Marketplace and create a developer account if you don’t have one already. From there, follow the README instructions in the GitHub repo; you should end up with an end-to-end solution that streams Zoom Meeting participant audio to the cloud and transcribes it live using Amazon Transcribe.
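The README covers the full deployment, but it can help to picture the audio path at its simplest: Amazon Transcribe’s streaming API accepts raw 16-bit little-endian PCM audio delivered in small frames. The sketch below is purely illustrative (not code from the project), assuming 16 kHz mono PCM and a frame size of 100 ms; the sample rate, chunk duration, and `pcm_chunks` helper are our own choices for the example, not values mandated by the framework.

```python
# Illustrative sketch only: slice raw PCM audio into fixed-size frames,
# the shape of input a streaming transcription client typically sends.
# Assumptions (ours, not the project's): 16 kHz, 16-bit mono PCM, 100 ms frames.

SAMPLE_RATE_HZ = 16_000   # 16 kHz mono
BYTES_PER_SAMPLE = 2      # 16-bit PCM
CHUNK_MS = 100            # deliver audio in 100 ms frames

CHUNK_BYTES = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * CHUNK_MS // 1000  # 3200

def pcm_chunks(pcm: bytes):
    """Yield fixed-size PCM frames; the final frame may be shorter."""
    for offset in range(0, len(pcm), CHUNK_BYTES):
        yield pcm[offset : offset + CHUNK_BYTES]

if __name__ == "__main__":
    one_second = bytes(SAMPLE_RATE_HZ * BYTES_PER_SAMPLE)  # 1 s of silence
    frames = list(pcm_chunks(one_second))
    print(len(frames), len(frames[0]))  # prints: 10 3200
```

In the deployed solution this framing happens inside the containerized virtual participant, which forwards the frames to the cloud rather than printing them; the sketch only shows why participant audio arrives at Amazon Transcribe as a steady stream of small PCM frames.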
To learn more from our developer community and Developer Relations team, be sure to take a look at the dedicated thread on the Zoom Developer Forum. We’d love for you to check it out, and we look forward to your feedback.