How Aircore’s New Audio SDKs Finally Make Real-Time In-App Communication Easy
When it comes to implementing WebRTC, the saying goes that the first 50% is easy, the next 30% is hard, and the last 20% is practically impossible. This is especially true in cases involving globally distributed users with a wide variety of hardware and network conditions. With a big enough team and plenty of time, you can probably get there. But you have to ask whether you are better off investing your time and money in the things that matter more to your domain and application.
The good news? Aircore is launching two new audio SDKs, built on expertise honed over eleven years in the WebRTC space, that streamline and simplify WebRTC for a variety of developer audiences – even in scenarios involving disparate communicators, hardware, and networks.
Here’s an overview of the key things developers need to know about these new audio SDKs.
The goal of our audio SDKs is to abstract away the complexity of WebRTC and provide higher-level abstractions via simple, high-quality client APIs. These APIs use our distributed global server infrastructure to add real-time audio to your application, providing low latency at global scale.
As we started building out the APIs, our main goal was to help two kinds of developers: those who want a drop-in audio experience with a standard UI, and those who need full control to build a custom experience. This led us to build two distinct SDKs to serve those disparate needs: Sync Audio and Flex Audio.
While the Flex SDK gives control to application developers and lets them build custom UI and unique experiences, the reality is that a lot of applications with audio features, like Slack huddles, Twitter Spaces, or Figma’s audio collaboration, have a standard UI.
That’s why Aircore’s Sync SDK ships with that standard UI ready-made. The API surface for the SDK is mostly around style and branding. Like the Flex SDK, the Sync SDK uses our RTDN backend infrastructure for low latency and global scale.
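To make that concrete, here is a sketch of what an API surface focused on style and branding could look like. This is purely illustrative — the function and option names (`createSyncTheme`, `accentColor`, and so on) are assumptions for the sake of the example, not the real Sync SDK API:

```javascript
// Hypothetical sketch -- the actual Sync SDK API surface may differ.
// With a drop-in UI, the app configures look and feel, not call logic.
function createSyncTheme(overrides = {}) {
  // Default branding values; every field here is illustrative.
  const defaults = {
    accentColor: "#4f46e5",            // buttons, speaking indicators
    backgroundColor: "#ffffff",
    fontFamily: "system-ui, sans-serif",
    logoUrl: null,                     // optional brand logo in the panel
    cornerRadius: 8,                   // px, rounding on the audio panel
  };
  return { ...defaults, ...overrides };
}

// An app would pass its theme once at initialization and be done:
const theme = createSyncTheme({
  accentColor: "#e11d48",
  logoUrl: "https://example.com/logo.svg",
});
```

The point of a surface like this is that branding is the only decision left to the integrating app; everything about the call experience itself stays inside the SDK.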
Check out our Sync Audio SDKs here
Customizing the Sync Web SDK
With the Flex Audio APIs, you can send and receive audio streams without worrying about device audio details, encoding technologies, network, and CPU optimizations. These APIs work with our real-time delivery network (RTDN), a globally distributed infrastructure with smart routing to get you the lowest latency and elastic scale.
Here are some of the things our Flex Audio SDK provides:
The Flex Audio API lets you build a custom user experience. To use the Flex APIs, your designers and engineers will define and implement the user experience, leaving the real-time calling logic to the Flex SDK.
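The division of labor described above — the app owns the UI, the SDK owns devices, codecs, and transport — can be sketched as an event-driven client API. Everything below is a hypothetical illustration; the class and method names (`FlexSession`, `joinChannel`, `publishAudio`) are assumptions, not the real Flex SDK:

```javascript
// Hypothetical sketch of a Flex-style client API; names are illustrative.
// The app supplies the UI and reacts to events; a real SDK would handle
// device audio, encoding, and network transport behind these calls.
class FlexSession {
  constructor() {
    this.listeners = {}; // event name -> array of callbacks
    this.joined = false;
  }
  on(event, cb) {
    (this.listeners[event] ||= []).push(cb);
  }
  emit(event, payload) {
    for (const cb of this.listeners[event] || []) cb(payload);
  }
  joinChannel(channelId) {
    // A real SDK would negotiate with the backend here; we just flip state.
    this.joined = true;
    this.emit("joined", { channelId });
  }
  publishAudio() {
    if (!this.joined) throw new Error("join a channel first");
    this.emit("audioPublished", {});
  }
}

// The app wires these events straight into its own custom UI:
const session = new FlexSession();
session.on("joined", ({ channelId }) => {
  console.log(`connected to ${channelId}`); // e.g. update a custom roster view
});
session.joinChannel("standup-room");
session.publishAudio();
```

The design choice this illustrates is that the SDK never renders anything: it exposes state transitions as events, and the application decides how each one looks on screen.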
We are starting with Flex Audio SDKs for iOS and Android, with a Web SDK coming soon. If you are interested in other platforms or frameworks, please contact us.
Our team has been working on real-time technologies, specifically WebRTC, for a decade. We started in 2012 when WebRTC was in its infancy, and built one of the world’s first WebRTC video platforms, vLine.com. This was at a time when the WebRTC stack from Google and Chrome was much less mature than it is today, and using that stack to build a platform like vLine was therefore no easy task.
Things have come a long way since then, but one fact hasn’t changed – WebRTC is still hard. The protocol, network algorithms, and audio-video subsystems – including coding and decoding – are highly complex.
When starting with WebRTC, many developers fall into the trap of “How hard can this be?” – especially when it’s easy enough, with a bit of JS code, to get a sample call between two users on a good network from a browser on your laptop. The hard part is ensuring all the end-to-end pieces work seamlessly when disparate groups of people with a wide variety of hardware in various network conditions try to communicate.
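The "easy 50%" above can be illustrated with a toy signaling exchange between two in-memory peers. In a real browser demo each peer would be an `RTCPeerConnection` and the messages would carry SDP offers, answers, and ICE candidates; here the peers are faked to show how little glue the happy path needs — and, by omission, where the hard part lives:

```javascript
// Toy in-memory offer/answer exchange -- the happy-path demo, nothing more.
function makePeer(name, signaling) {
  const peer = {
    name,
    remoteDescription: null,
    send: (to, msg) => signaling.deliver(to, msg),
    receive(msg) {
      if (msg.type === "offer") {
        peer.remoteDescription = msg.sdp;
        // Answer immediately: no packet loss, no renegotiation, no ICE retries.
        peer.send(msg.from, { type: "answer", from: name, sdp: `answer-from-${name}` });
      } else if (msg.type === "answer") {
        peer.remoteDescription = msg.sdp;
      }
    },
  };
  signaling.register(name, peer);
  return peer;
}

// An in-memory "signaling server" that delivers every message instantly.
const signaling = {
  peers: {},
  register(name, peer) { this.peers[name] = peer; },
  deliver(to, msg) { this.peers[to].receive(msg); },
};

const alice = makePeer("alice", signaling);
const bob = makePeer("bob", signaling);
alice.send("bob", { type: "offer", from: "alice", sdp: "offer-from-alice" });
// Both sides now hold a remote description -- the demo "just works".
```

Everything this sketch skips — NAT traversal, congested networks, device quirks, codec negotiation, reconnection, scale — is exactly the end-to-end work that makes production WebRTC hard.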
Adding real-time audio to apps has been far too complicated for far too long. With these new SDKs, our mission is to finally make it as easy as developers need it to be.
We would love your feedback on both of these SDKs. Get in touch here.