iOS application example to capture stream from device screen
This example can be used to publish a WebRTC stream captured from the device screen, plus a separate audio stream for the publisher's voice
The main application view is shown below. Inputs:
- WCS Websocket URL
- screen video stream name to publish
- audio stream name to publish
Application view when screen sharing is started
A special extension process is used to capture video from the screen. This process runs until the device is locked or screen capturing is stopped manually.
Analyzing example code
To analyze the code, take the ScreenCapturer example version which is available on GitHub.
Classes
- main application view class: ScreenCapturerViewController (implementation file ScreenCapturerViewController.swift)
- extension implementation class: ScreenCapturerExtensionHandler (implementation file ScreenCapturerExtensionHandler.swift)
1. Import API code
```swift
import FPWCSApi2Swift
```
2. Session creation to publish audio
WCSSession, WCSSession.connect code
The following parameters are passed to WCSSession constructor:
- WCS server URL
- backend application name defaultApp
```swift
@IBAction func publishAudioPressed(_ sender: Any) {
    if (publishAudioButton.title(for: .normal) == "Publish Audio") {
        let options = FPWCSApi2SessionOptions()
        options.urlServer = self.urlField.text
        options.appKey = "defaultApp"
        do {
            try session = WCSSession(options)
        } catch {
            print(error)
        }
        ...
        session?.connect()
        changeViewState(publishAudioButton, false)
    } else {
        ...
    }
}
```
3. Audio publishing
WCSSession.createStream, WCSStream.publish code
The following parameters are passed to createStream method:
- stream name to publish
- constraints to capture audio only
```swift
func onConnected(_ session: WCSSession) throws {
    let options = FPWCSApi2StreamOptions()
    options.name = publishAudioName.text
    options.constraints = FPWCSApi2MediaConstraints(audio: true, video: false)
    do {
        publishStream = try session.createStream(options)
    } catch {
        print(error)
    }
    ...
    do {
        try publishStream?.publish()
    } catch {
        print(error)
    }
}
```
4. Screen capturer extension parameters setup
The UserDefaults suiteName parameter must be equal to the extension application group id
```swift
@objc func pickerAction() {
    //#WCS-3207 - Use suite name as group id in entitlements
    let userDefaults = UserDefaults.init(suiteName: "group.com.flashphoner.ScreenCapturerSwift")
    userDefaults?.set(urlField.text, forKey: "wcsUrl")
    userDefaults?.set(publishVideoName.text, forKey: "streamName")
}
```
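For the shared UserDefaults suite to work, both the main application target and the broadcast extension target must declare the same App Group in their entitlements. A minimal sketch of the relevant entitlements fragment, assuming the group id used in the example (your own project would use its own group id registered in the Apple Developer portal):

```xml
<!-- Entitlements fragment for BOTH the app and the extension targets -->
<key>com.apple.security.application-groups</key>
<array>
    <string>group.com.flashphoner.ScreenCapturerSwift</string>
</array>
```

Without a matching App Group on both targets, `UserDefaults(suiteName:)` returns a container the other process cannot see, and the extension will not receive the URL and stream name.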
5. Screen capturer class setup
fileprivate var capturer: ScreenRTCVideoCapturer = ScreenRTCVideoCapturer()
6. Receiving screen capture parameters in extension code
```swift
override func broadcastStarted(withSetupInfo setupInfo: [String : NSObject]?) {
    //#WCS-3207 - Use suite name as group id in entitlements
    let userDefaults = UserDefaults.init(suiteName: "group.com.flashphoner.ScreenCapturerSwift")
    let wcsUrl = userDefaults?.string(forKey: "wcsUrl")
    if wcsUrl != self.wcsUrl || session?.getStatus() != .fpwcsSessionStatusEstablished {
        session?.disconnect()
        session = nil
    }
    self.wcsUrl = wcsUrl ?? self.wcsUrl
    let streamName = userDefaults?.string(forKey: "streamName")
    self.streamName = streamName ?? self.streamName
    ...
}
```
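The session check in `broadcastStarted` drops the current session when either the stored WCS URL has changed or the session is no longer established. That decision can be sketched as a small pure helper; the name `needsNewSession` and its parameters are illustrative only, not part of the SDK API:

```swift
// Decide whether the extension must drop its current session:
// either the WCS URL stored by the main app changed, or the
// existing session is no longer in the ESTABLISHED state.
// Hypothetical helper for illustration - the example inlines this check.
func needsNewSession(storedUrl: String?, currentUrl: String?, sessionEstablished: Bool) -> Bool {
    return storedUrl != currentUrl || !sessionEstablished
}
```

Factoring the check out this way makes it clear that a fresh session is created on the next step only when one of these two conditions holds.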
7. Session creation to publish screen stream
WCSSession, WCSSession.connect code
```swift
if (session == nil) {
    let options = FPWCSApi2SessionOptions()
    options.urlServer = self.wcsUrl
    options.appKey = "defaultApp"
    do {
        try session = WCSSession(options)
    } catch {
        print(error)
    }
    ...
    session?.connect()
}
```
8. Screen stream publishing
WCSSession.createStream, WCSStream.publish code
The following parameters are passed to createStream method:
- stream name to publish
- ScreenRTCVideoCapturer object to capture video from screen
```swift
func onConnected(_ session: WCSSession) throws {
    let options = FPWCSApi2StreamOptions()
    options.name = streamName
    options.constraints = FPWCSApi2MediaConstraints(audio: false, videoCapturer: capturer)
    try publishStream = session.createStream(options)
    ...
    try publishStream?.publish()
}
```
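The extension receives screen frames through ReplayKit's sample handler and hands them to the capturer. A hedged sketch of how `processSampleBuffer` might forward video frames; the `processSampleBuffer` call on `ScreenRTCVideoCapturer` is an assumption about its interface, so check the GitHub example source for the actual method name:

```swift
import ReplayKit

override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                  with sampleBufferType: RPSampleBufferType) {
    switch sampleBufferType {
    case .video:
        // Hypothetical forwarding call: the real method on
        // ScreenRTCVideoCapturer may differ - see the GitHub source.
        capturer.processSampleBuffer(sampleBuffer)
    default:
        // Screen audio is not published in this example; the publisher's
        // voice goes through the separate audio stream from the main app.
        break
    }
}
```

Only video buffers are forwarded here, which matches the constraints above (`audio: false`, capturer-provided video).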