Ensure compatibility with multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above.
Minimize dependencies to prevent version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the key capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

For local files, similar code can be used to perform transcription:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

Using LeMUR for LLM Applications

The SDK integrates with LeMUR to allow developers to build large language model (LLM) applications on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);
Console.WriteLine(response.Response);

Audio Intelligence Models

In addition, the SDK has built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

For more information, visit the official AssemblyAI blog.

Image source: Shutterstock