Overview

Since build 5.2.2022, WCS supports hardware-accelerated video decoding and encoding using NVIDIA GPUs.

Server requirements

  • Ubuntu 22.04 or newer
  • NVIDIA drivers installed
  • NVIDIA GPU card installed

Use cases supported

  • H264 and H265 transcoding
  • H264 mixer (encoding only)
  • HLS ABR H264
  • WebRTC ABR H264


If a stream is decoded on CPU, it will also be encoded on CPU (except for the mixer: incoming streams are decoded on CPU, but the outgoing stream may be encoded on GPU)

Use cases unsupported

GPU transcoding is not supported for:


In all of these cases the stream must be decoded on CPU, so hardware acceleration should be disabled

It is recommended to deploy a separate server without GPU for the cases listed above.

Configuration

NVIDIA CUDA drivers must be installed on the server using the official manual for hardware acceleration to work. The CUDA toolkit does not need to be installed:

sudo apt-get install -y cuda-drivers

If the drivers are installed successfully, the nvidia-smi utility displays the current GPU state

Hardware acceleration support should be enabled using the following parameter

enable_hardware_acceleration=true
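As a sketch, the parameter can be appended to the WCS settings file. The standard settings file is /usr/local/FlashphonerWebCallServer/conf/flashphoner.properties; a temporary copy is used here so the sketch can run anywhere:

```shell
# Enable hardware acceleration in the WCS settings file.
# The standard path is /usr/local/FlashphonerWebCallServer/conf/flashphoner.properties;
# a temporary copy is used here for illustration.
CONF=/tmp/flashphoner.properties
touch "$CONF"
# Append the parameter only if it is not already present
grep -q '^enable_hardware_acceleration=' "$CONF" \
  || echo 'enable_hardware_acceleration=true' >> "$CONF"
grep '^enable_hardware_acceleration' "$CONF"
```

Note that WCS typically needs to be restarted to pick up settings changes.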

The CUDA encoder/decoder has priority for supported use cases when hardware acceleration is enabled!

GPU load tuning

The GPU default settings are enough for minimal transcoding sessions and testing, but for production use it is necessary to set up parameters such as the maximum number of encoders available per GPU and per server. The configuration file /usr/local/FlashphonerWebCallServer/conf/gpu_config.json is used for this; the GPU may be identified either by its name or by its PCI bus Id

[
  {
    "gpuName" : "Tesla P100-PCIE-16GB",
    "disabled" : false,
    "config" : {
      "maxEncodeSessions" : 95,
      "maxHostEncodeSessions" : 95,
      "maxDecodeSessions" : 19,
      "maxEncoderUtilization" : 90,
      "maxDecoderUtilization" : 90,
      "minMemoryAvailable" : "4 GB"
    }
  }
]
[
  {
    "pciBusId" : "0000:02:00.0",
    "disabled" : false,
    "config" : {
      "maxEncodeSessions" : 95,
      "maxHostEncodeSessions" : 95,
      "maxDecodeSessions" : 19,
      "maxEncoderUtilization" : 90,
      "maxDecoderUtilization" : 90,
      "minMemoryAvailable" : "4 GB"
    }
  }
]

Where:

  • gpuName - GPU name
  • pciBusId - GPU PCI bus identifier
  • disabled - do not use this GPU
  • maxEncodeSessions - maximum number of encoders per GPU
  • maxHostEncodeSessions - maximum number of encoders per server
  • maxDecodeSessions - maximum number of decoders per GPU
  • maxEncoderUtilization - maximum encoding GPU load, in percent
  • maxDecoderUtilization - maximum decoding GPU load, in percent
  • minMemoryAvailable - minimum amount of GPU memory that must remain free

If the server has more than one GPU, a configuration with a GPU name is applied to all the GPUs with that name. A configuration with a PCI bus Id is applied only to the GPU with that Id, because PCI bus Ids are unique.
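Since a syntax error in gpu_config.json can break the GPU configuration, it is worth checking that the file parses as JSON before restarting WCS. A minimal sketch, writing a sample per-GPU entry (the illustrative values from above) to a temporary path and validating it:

```shell
# Write a sample per-GPU config entry (illustrative values from the example above)
# and check that it parses as JSON. In production, validate
# /usr/local/FlashphonerWebCallServer/conf/gpu_config.json instead.
cat > /tmp/gpu_config.json <<'EOF'
[
  {
    "gpuName" : "Tesla P100-PCIE-16GB",
    "disabled" : false,
    "config" : {
      "maxEncodeSessions" : 95,
      "maxHostEncodeSessions" : 95,
      "maxDecodeSessions" : 19,
      "maxEncoderUtilization" : 90,
      "maxDecoderUtilization" : 90,
      "minMemoryAvailable" : "4 GB"
    }
  }
]
EOF
python3 -m json.tool /tmp/gpu_config.json > /dev/null \
  && echo "gpu_config.json: valid JSON"
```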

GPU testing and configuration file generation tool

There is a special tool to test the GPU load capability of the server and to generate a configuration file. The tool should be launched as follows:

/usr/local/FlashphonerWebCallServer/tools/gpu_calibration_tool.sh sample.mp4 --separate-test --interval=20

Where:

  • sample.mp4 - a source video file for testing; it should contain a video track in the maximum resolution to be used in production, for example 1920x1080
  • --separate-test - test all the GPUs installed and available to the operating system; if the key is not set, only one random GPU will be tested
  • --interval - interval in seconds between testing different chips

The tool uses the encoding profiles set in the /usr/local/FlashphonerWebCallServer/conf/hls_abr_profiles.yml file (see HLS ABR on a single node) to test encoding.

The tool runs the test and creates the configuration file with the maximum GPU parameters based on the test results. If more than one chip is tested, PCI bus Ids will be set in the configuration file. If only one chip is tested (no --separate-test key), the GPU name will be set in the configuration file.

If the server has different types of NVIDIA GPUs and the --separate-test key is not set, only one GPU of one of those types will be tested

WCS must be stopped during the test!

Decoder/encoder reconfiguration after source stream resolution change

A WebRTC stream's publishing resolution may occasionally change. Also, a screen sharing stream capturing a single window changes its resolution when the window is resized. The GPU decoder parameters should be updated after every such resolution change. The default parameters

hardware_acceleration_enable_soft_reconfiguration=true
hardware_acceleration_reconfigure_max_width=1920
hardware_acceleration_reconfigure_max_height=1088

allow only the resolution settings to be changed if the stream resolution has decreased. But the decoder settings will be fully reset if the resolution increases above the configured threshold; effectively, a new decoder is created. For example, with the defaults above, a stream rising above 1920x1088 forces the decoder to be recreated.

The following parameter

hardware_acceleration_enable_soft_reconfiguration=false

forces the decoder settings to be reset on every resolution change. This may dramatically affect performance.

Monitoring

Current GPU usage statistics may be obtained using the REST API.

The REST query should be an HTTP/HTTPS POST request as follows:

  • HTTP: http://streaming.flashphoner.com:8081/rest-api/gpu/info
  • HTTPS: https://streaming.flashphoner.com:8444/rest-api/gpu/info

Where:

  • streaming.flashphoner.com - WCS server address
  • 8081 - the standard REST / HTTP port of the WCS server
  • 8444 - the standard HTTPS port
  • rest-api - the required part of the URL
  • /gpu/info - REST-method to use

REST methods and responses

/gpu/info

Get current GPU usage statistics

Request example

POST /rest-api/gpu/info HTTP/1.1
Host: localhost:8081
Content-Type: application/json

Response example

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Type: application/json

{
  "cudaVersion": "12.4",
  "nvencVersion": "12.2",
  "driverVersion": "550.90.07",
  "nvmlVersion": "12.550.90.07",
  "numOfAvailableDevices": 1,
  "numOfDecodingSessions": 1,
  "numOfEncodingSessions": 5,
  "numOfHostEncodingSessions": 5,
  "deviceList": [
    {
      "name": "Tesla P100-PCIE-16GB",
      "pciBusId": "0000:02:00.0",
      "computeCapability": "6.0",
      "computeMode": 0,
      "computeModeDescription": "Default compute mode (Multiple contexts allowed per device)",
      "numOfDecodingSessions": 1,
      "numOfEncodingSessions": 5,
      "numOfHostEncodingSessions": 5,
      "fpsStats": {
        "decoderFps": 30,
        "encoderFps": 30
      },
      "nativeEncoderStats": {
        "averageFps": 29,
        "averageLatency": 1225
      },
      "utilizationState": {
        "computeUtilization": 1,
        "memUtilization": 0,
        "decoderUtilization": 5,
        "encoderUtilization": 4,
        "totalMem": "16 GB",
        "freeMem": "15.26 GB",
        "usedMem": "752.25 MB",
        "usedPciBandwidth": "984.76 KB"
      },
      "supportedEncodeCodecs": {
        "H265": {
          "minWidth": 65,
          "minHeight": 33,
          "maxWidth": 4096,
          "maxHeight": 4096
        },
        "H264": {
          "minWidth": 33,
          "minHeight": 17,
          "maxWidth": 4096,
          "maxHeight": 4096
        }
      },
      "supportedDecodeCodecs": {
        "H265": {
          "minWidth": 144,
          "minHeight": 144,
          "maxWidth": 4096,
          "maxHeight": 4096
        },
        "H264": {
          "minWidth": 48,
          "minHeight": 16,
          "maxWidth": 4096,
          "maxHeight": 4096
        }
      }
    }
  ]
}
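Responses like the one above can be consumed programmatically, for example to feed an external monitoring system. A minimal sketch, parsing a saved /gpu/info response (a trimmed copy of the example above; in practice the JSON would come from the POST request described earlier):

```shell
# Save a trimmed /gpu/info response (taken from the response example above) and
# extract the fields most useful for monitoring: session counts and per-GPU load.
cat > /tmp/gpu_info.json <<'EOF'
{
  "cudaVersion": "12.4",
  "driverVersion": "550.90.07",
  "numOfAvailableDevices": 1,
  "numOfEncodingSessions": 5,
  "numOfDecodingSessions": 1,
  "deviceList": [
    {
      "name": "Tesla P100-PCIE-16GB",
      "utilizationState": {
        "encoderUtilization": 4,
        "decoderUtilization": 5,
        "freeMem": "15.26 GB"
      }
    }
  ]
}
EOF
python3 - <<'PY'
import json

with open("/tmp/gpu_info.json") as f:
    stats = json.load(f)

print(f'driver {stats["driverVersion"]}, CUDA {stats["cudaVersion"]}')
print(f'sessions: {stats["numOfEncodingSessions"]} enc / {stats["numOfDecodingSessions"]} dec')
for dev in stats["deviceList"]:
    u = dev["utilizationState"]
    print(f'{dev["name"]}: enc {u["encoderUtilization"]}%, '
          f'dec {u["decoderUtilization"]}%, free {u["freeMem"]}')
PY
```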

Return codes

Code | Reason
200  | OK
500  | Internal server error

Parameters

Parameter | Description | Example
cudaVersion | CUDA version | 12.4
nvencVersion | NVENC version | 12.2
driverVersion | NVIDIA drivers version | 550.90.07
nvmlVersion | NVIDIA management library version | 12.550.90.07
numOfAvailableDevices | Available GPU count | 1
numOfDecodingSessions | Decoders count | 1
numOfEncodingSessions | Encoders count | 5
numOfHostEncodingSessions | Encoders count per server | 5
name | GPU name | Tesla P100-PCIE-16GB
pciBusId | GPU PCI bus Id | 0000:02:00.0
computeCapability | GPU compute capability | 6.0
computeMode | GPU compute mode | 0
computeModeDescription | GPU compute mode description | Default compute mode (Multiple contexts allowed per device)
decoderFps | Decoding FPS | 30
encoderFps | Encoding FPS | 30
averageFps | Average FPS by native GPU stats | 29
averageLatency | Average latency in microseconds by native GPU stats | 1225
computeUtilization | GPU compute utilization, percent | 1
memUtilization | Memory utilization, percent | 0
decoderUtilization | Decoder utilization, percent | 5
encoderUtilization | Encoder utilization, percent | 4
totalMem | Total memory amount | 16 GB
freeMem | Free memory amount | 15.26 GB
usedMem | Used memory amount | 752.25 MB
usedPciBandwidth | PCI bus bandwidth used | 984.76 KB
minWidth | Minimum picture width to decode/encode | 33
minHeight | Minimum picture height to decode/encode | 17
maxWidth | Maximum picture width to decode/encode | 4096
maxHeight | Maximum picture height to decode/encode | 4096