The relationships among the major components of Microsoft Unified Communications Managed API 2.0 Core SDK appear in the following illustration.

The major components appearing in the illustration are LocalEndpoint (of which two implementations are ApplicationEndpoint and UserEndpoint), Conversation, and CollaborationPlatform. A CollaborationPlatform instance can manage multiple LocalEndpoint instances, and each LocalEndpoint instance can have multiple Conversation instances.
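
This relationship can be sketched in code as follows. The user agent string, SIP URI, server name, port, and credentials are placeholder values, error handling is omitted, and each Begin/End pair is called back to back only to keep the sketch short.

    using System.Net;
    using Microsoft.Rtc.Collaboration;
    using Microsoft.Rtc.Signaling;

    class PlatformSketch
    {
        static void Main()
        {
            // One CollaborationPlatform instance manages the process-wide SIP stack.
            ClientPlatformSettings platformSettings =
                new ClientPlatformSettings("SampleUserAgent", SipTransportType.Tls);
            CollaborationPlatform platform = new CollaborationPlatform(platformSettings);
            platform.EndStartup(platform.BeginStartup(null, null));

            // A platform can manage multiple LocalEndpoint instances; UserEndpoint
            // (shown here) and ApplicationEndpoint are the two implementations.
            UserEndpointSettings endpointSettings =
                new UserEndpointSettings("sip:alice@contoso.com", "sipserver.contoso.com", 5061);
            endpointSettings.Credential = CredentialCache.DefaultNetworkCredentials;
            UserEndpoint userEndpoint = new UserEndpoint(platform, endpointSettings);
            userEndpoint.EndEstablish(userEndpoint.BeginEstablish(null, null));

            // Each endpoint can in turn own multiple Conversation instances.
            Conversation conversation = new Conversation(userEndpoint);
        }
    }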

In addition to listing many of the UCMA 2.0 Core SDK components, the illustration arranges them along two dimensions. The horizontal axis is divided into two categories: call controls and media controls. Call controls are concerned with signaling data, while media controls are concerned with the instant message (IM) and audio data that is communicated between participants.
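
As a minimal illustration of this split, the following sketch establishes an InstantMessagingCall (a call control, responsible for signaling) and then uses the InstantMessagingFlow it exposes (a media control) to send the message content. The destination URI is a placeholder, and the conversation parameter is assumed to have been created as in the earlier sketch, alongside which this method would live.

    static void SendInstantMessage(Conversation conversation)
    {
        // Call control: the InstantMessagingCall handles the SIP signaling.
        InstantMessagingCall imCall = new InstantMessagingCall(conversation);

        imCall.InstantMessagingFlowConfigurationRequested += (sender, e) =>
        {
            // Media control: the InstantMessagingFlow carries the message content.
            e.Flow.StateChanged += (s, args) =>
            {
                if (args.State == MediaFlowState.Active)
                {
                    e.Flow.EndSendInstantMessage(
                        e.Flow.BeginSendInstantMessage("Hello from UCMA", null, null));
                }
            };
        };

        imCall.EndEstablish(
            imCall.BeginEstablish("sip:bob@contoso.com", new CallEstablishOptions(), null, null));
    }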

The call control category is further subdivided into multiparty controls, which are concerned with conversations among three or more participants, and two-party controls, which are concerned with conversations between exactly two participants.
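
The following sketch contrasts the two kinds of controls. The AudioVideoCall is a two-party control established directly to another user's URI (a placeholder here); the AudioVideoMcuSession is a multiparty control that becomes available on a conversation's ConferenceSession once that conversation has joined a conference (the join itself is not shown).

    // Requires: using Microsoft.Rtc.Collaboration; using Microsoft.Rtc.Collaboration.AudioVideo;
    static void TwoPartyVersusMultiparty(Conversation conversation, Conversation conferenceConversation)
    {
        // Two-party control: establish an audio/video call directly to one other participant.
        AudioVideoCall twoPartyCall = new AudioVideoCall(conversation);
        twoPartyCall.EndEstablish(
            twoPartyCall.BeginEstablish("sip:bob@contoso.com", new CallEstablishOptions(), null, null));

        // Multiparty control: conferenceConversation is assumed to have already joined a
        // conference through its ConferenceSession; the MCU session then handles the
        // audio/video mixing for all participants.
        AudioVideoMcuSession avMcu = conferenceConversation.ConferenceSession.AudioVideoMcuSession;
    }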

The media control category is further subdivided into media flows, devices, and media providers. Each type of media (IM or audio/video) has its own type of flow. The devices in the devices column can be used to record an audio stream, play an audio stream, and send or receive telephone keypad tones. There are also two devices that, when used in conjunction with the Microsoft.Speech object model, can be used to recognize and synthesize speech. Two of the media providers shown are provided with UCMA 2.0 Core SDK. The third (labeled as ContosoProvider in the illustration) is not provided, but can be implemented by third-party developers. Media providers are not directly accessible, but the flows they provide are accessible.
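
For example, the following sketch attaches two of the devices to an AudioVideoFlow that is assumed to be already active: a Recorder that writes incoming audio to a Windows Media file, and a ToneController that reports received keypad (DTMF) tones. The file name is a placeholder.

    // Requires: using System; using Microsoft.Rtc.Collaboration.AudioVideo;
    static void AttachDevices(AudioVideoFlow audioVideoFlow)
    {
        // Recorder: consumes audio coming from the flow and writes it to a sink.
        Recorder recorder = new Recorder();
        recorder.AttachFlow(audioVideoFlow);
        recorder.SetSink(new WmaFileSink("received-audio.wma"));
        recorder.Start();

        // ToneController: sends and receives telephone keypad (DTMF) tones on the flow.
        ToneController toneController = new ToneController();
        toneController.AttachFlow(audioVideoFlow);
        toneController.ToneReceived += (sender, e) =>
            Console.WriteLine("Received tone: " + e.Tone);
    }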

The color-coded components at the same horizontal level represent the components that take part in a particular communication mode. For example, the AudioVideoProvider sends audio/video media to an AudioVideoFlow, and then to either an AudioVideoCall (for two parties) or to an AudioVideoMcuSession (for more than two parties). The objects shown in the Devices column can be attached to an AudioVideoFlow, from which audio media can come (Recorder, ToneController, SpeechRecognitionConnector), or to which audio media can go (Player, ToneController, SpeechSynthesisConnector).
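
The following sketch follows that path for a two-party call: it establishes an AudioVideoCall, waits for its AudioVideoFlow to become active, and then attaches a Player that feeds audio from a Windows Media file into the flow. The destination URI and file name are placeholders, and the asynchronous calls are paired synchronously only for brevity.

    // Requires: using Microsoft.Rtc.Collaboration; using Microsoft.Rtc.Collaboration.AudioVideo;
    static void PlayFileIntoCall(Conversation conversation)
    {
        AudioVideoCall avCall = new AudioVideoCall(conversation);

        avCall.AudioVideoFlowConfigurationRequested += (sender, e) =>
        {
            e.Flow.StateChanged += (s, args) =>
            {
                if (args.State == MediaFlowState.Active)
                {
                    // Audio from the Player goes into the flow and on to the remote
                    // party (AudioVideoCall) or to the MCU (AudioVideoMcuSession).
                    WmaFileSource source = new WmaFileSource("greeting.wma");
                    source.EndPrepareSource(
                        source.BeginPrepareSource(MediaSourceOpenMode.Buffered, null, null));

                    Player player = new Player();
                    player.AttachFlow(e.Flow);
                    player.SetSource(source);
                    player.Start();
                }
            };
        };

        avCall.EndEstablish(
            avCall.BeginEstablish("sip:bob@contoso.com", new CallEstablishOptions(), null, null));
    }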

The vertical axis is divided into two principal categories: single modal and multimodal. These categories indicate whether communication occurs by means of a single mode (for example, using IM only) or by multiple modes (for example, using IM and audio).
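
For example, a single Conversation can carry both an IM call and an audio/video call to the same remote participant, as in the following sketch (placeholder URI, no error handling).

    // Requires: using Microsoft.Rtc.Collaboration; using Microsoft.Rtc.Collaboration.AudioVideo;
    static void EstablishMultimodalConversation(UserEndpoint userEndpoint)
    {
        // One Conversation, two modalities: IM and audio/video.
        Conversation conversation = new Conversation(userEndpoint);

        InstantMessagingCall imCall = new InstantMessagingCall(conversation);
        imCall.EndEstablish(
            imCall.BeginEstablish("sip:bob@contoso.com", new CallEstablishOptions(), null, null));

        AudioVideoCall avCall = new AudioVideoCall(conversation);
        avCall.EndEstablish(
            avCall.BeginEstablish("sip:bob@contoso.com", new CallEstablishOptions(), null, null));
    }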

UCMA 2.0 Core SDK provides built-in support for instant messaging and audio communication modalities. The platform can be extended to provide support for other modalities. The top row in Conversation shows the components that third-party developers can create to provide this support.