Introduction to Sync and Flex Audio SDKs

March 2, 2023

How Aircore’s New Audio SDKs Finally Make Real-Time In-App Communication Easy

When it comes to implementing WebRTC, the saying goes that the first 50% is easy, the next 30% is hard, and the last 20% is practically impossible. This is especially true in cases involving globally distributed users with a wide variety of hardware and network conditions. With a big enough team and plenty of time, you can probably get there. But you have to ask whether you are better off investing that time and money in the things that matter more to your domain and application.

The good news? Aircore is launching two new audio SDKs, built on expertise honed over eleven years in the WebRTC space, that streamline and simplify WebRTC for a variety of developer audiences – even in scenarios involving disparate users, hardware, and networks.

Here’s an overview of the key things developers need to know about these new audio SDKs.

Aircore Audio SDKs

The goal of our audio SDKs is to abstract away the complexity of WebRTC and provide higher-level abstractions through simple, high-quality client APIs. These APIs use our distributed global server infrastructure to add real-time audio to your application, providing low latency and global scale.

As we started building out the APIs, our main goal was to help developers who need:

  • A turnkey UI solution that integrates common audio chat use cases into their application in minutes.
  • A flexible audio API for building a wide range of audio experiences. These developers typically build their own UI for their custom use case.

This led us to build two distinct SDKs to serve those disparate needs: Sync Audio and Flex Audio.

Sync Audio SDK

While the Flex SDK gives control to application developers and lets them build custom UI and unique experiences, the reality is that many applications with audio features – like Slack huddles, Twitter Spaces, or Figma’s audio collaboration – share a fairly standard UI.

That’s why Aircore’s Sync SDK:

  • Provides a simple UI solution you can integrate into your application in minutes.
  • Handles all UI interactions, state machines, and error states.
  • Provides a dynamic built-in layout engine that adapts to different screen sizes, so you don’t have to reinvent the wheel on the UI and UX side.
  • Makes the UI styling configurable, so you can easily brand the UI to match your application’s look and feel.

The Sync SDK’s API surface is mostly about style and branding. Like the Flex SDK, it uses our RTDN backend infrastructure for low latency and global scale.
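
As a rough illustration, a Sync integration on the web might look like the sketch below. The package name, `SyncClient` class, and styling options are hypothetical placeholders rather than the actual Sync API – this is only a sketch of the shape, assuming a mount-and-theme style of integration:

```typescript
// A hypothetical sketch, not the actual Aircore Sync API: the package
// name, class, and options here are illustrative placeholders.
import { SyncClient } from "@aircore/sync-audio"; // hypothetical package

const client = new SyncClient({
  publishableKey: "pk_test_123", // hypothetical auth parameter
  channel: "team-standup",       // the audio room to join
});

// Mount the prebuilt, responsive UI into a container element and restyle
// it so the panel matches the host application's branding.
client.mount(document.getElementById("audio-panel")!, {
  theme: {
    primaryColor: "#4f46e5",
    fontFamily: "Inter, sans-serif",
    borderRadius: "8px",
  },
});
```

Everything else – joining, muting, speaker indicators, error and reconnect states – would be handled inside the prebuilt UI.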

Check out our Sync Audio SDKs here

[Images: Customizing the Sync Web SDK]

Flex Audio SDK

With the Flex Audio APIs, you can send and receive audio streams without worrying about audio device details, codecs, or network and CPU optimizations. These APIs work with our real-time delivery network (RTDN), a globally distributed infrastructure with smart routing to get you the lowest latency and elastic scale.

Here are some of the things our Flex Audio SDK provides:

  • A simple, well-designed API that is easy to use yet flexible enough to support a wide variety of real-time audio use cases. We are starting with audio chat, but we envision our APIs evolving to support podcasting, music, and other use cases we have yet to predict.
  • Client-side APIs that seamlessly scale from 1:1 calls to large group events.

The Flex Audio API lets you build your own custom user experience: your designers and engineers define and implement the UI, leaving the real-time calling logic to the Flex SDK.
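
To make that division of labor concrete, here is a rough sketch of what a Flex-style session could look like. Flex launches first on iOS and Android, so this TypeScript shape is purely illustrative; `FlexSession`, its events, and `publishMicrophone` are hypothetical names, not the actual Flex API:

```typescript
// Hypothetical sketch of a Flex-style session; every name here is a
// placeholder, not the actual Aircore Flex API.
import { FlexSession } from "@aircore/flex-audio"; // hypothetical package

async function joinRoom(roomId: string): Promise<void> {
  const session = new FlexSession({ apiKey: "ak_test_123" }); // hypothetical auth

  // The SDK owns devices, codecs, and network adaptation; the app only
  // wires session events into its own custom UI.
  session.on("participantJoined", (p) => console.log(`${p.name} joined`));
  session.on("participantLeft", (p) => console.log(`${p.name} left`));
  session.on("audioLevel", (p, level) => updateSpeakingIndicator(p.id, level));

  await session.join(roomId);        // connect through the RTDN
  await session.publishMicrophone(); // start sending local audio
}

// App-defined UI hook (placeholder).
function updateSpeakingIndicator(participantId: string, level: number): void {
  /* drive a speaking animation in the custom UI */
}
```

The point of the shape: the app renders avatars and speaking indicators however it likes, while the SDK owns capture, transport, and scale.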

We are starting with Flex Audio SDKs for iOS and Android, with a Web SDK coming soon. If you are interested in other platforms or frameworks, please contact us.

Check out our Flex SDKs here

Tackling Persistent Challenges in WebRTC

Our team has been working on real-time technologies, specifically WebRTC, for over a decade. We started in 2012, when WebRTC was in its infancy, and built one of the world’s first WebRTC video platforms, vLine.com. This was at a time when the WebRTC stack from Google and Chrome was much less mature than it is today, and using that stack to build a platform like vLine was therefore no easy task.

Things have come a long way since then, but one fact hasn’t changed – WebRTC is still hard. The protocol, network algorithms, and audio-video subsystems – including encoding and decoding – are highly complex.

When starting with WebRTC, many developers fall into the trap of “How hard can this be?” – especially when it’s easy enough, with a bit of JS code, to get a sample call between two users on a good network from a browser on your laptop. The hard part is ensuring all the end-to-end pieces work seamlessly when people with a wide variety of hardware, on a wide variety of networks, try to communicate.
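
That first impression isn’t wrong, as far as it goes. Here is a minimal, happy-path sketch using the standard browser WebRTC APIs, with both peers living on the same page so no signaling server is even needed:

```typescript
// The "easy 50%": two RTCPeerConnections on one page, a perfect network,
// no signaling server, no reconnection, no device or codec edge cases.
async function localLoopbackCall(): Promise<void> {
  const pc1 = new RTCPeerConnection();
  const pc2 = new RTCPeerConnection();

  // Trickle ICE candidates directly between the two local peers.
  pc1.onicecandidate = (e) => e.candidate && pc2.addIceCandidate(e.candidate);
  pc2.onicecandidate = (e) => e.candidate && pc1.addIceCandidate(e.candidate);

  // Play whatever audio arrives on the receiving side.
  pc2.ontrack = (e) => {
    const audio = new Audio();
    audio.srcObject = e.streams[0];
    audio.play();
  };

  // Capture the microphone and send it.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  stream.getTracks().forEach((t) => pc1.addTrack(t, stream));

  // Classic offer/answer exchange – trivial when both peers share a page.
  await pc1.setLocalDescription(await pc1.createOffer());
  await pc2.setRemoteDescription(pc1.localDescription!);
  await pc2.setLocalDescription(await pc2.createAnswer());
  await pc1.setRemoteDescription(pc2.localDescription!);
}
```

Everything this sketch skips – signaling, NAT traversal across real networks, reconnection, device quirks, echo cancellation, congestion control at scale – is where the remaining effort lives.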

Adding real-time audio to apps has been far too complicated for far too long. With these new SDKs, our mission is to finally make it as easy as developers need it to be.

We would love your feedback on both of these SDKs. Get in touch here.