WebRTC QoS strategy -- Pacer network packet smoothing strategy

Brief introduction

The pacer (packet smoothing) strategy is one of WebRTC's QoS strategies and works on the data sending side. For pure audio communication it can usually be ignored: every audio frame has a fixed length and the audio bitrate is stable, so there are no bursts. Video is different. A single video frame can be large, often exceeding the network MTU, and an I-frame (key frame) in particular is usually many times larger than the MTU, so the frame has to be packetized into multiple RTP packets. If all of those RTP packets hit the network at the same moment, the burst can cause congestion and degrade the call. WebRTC therefore introduces the pacer, which adjusts the packet-sending rate according to the bitrate computed by the bandwidth estimator, so that the video traffic is distributed evenly over each time slice.
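
To make the burst problem concrete, here is a small back-of-the-envelope sketch; every number in it is an assumption picked for illustration, not a value taken from WebRTC:

#include <cstdio>

int main() {
  // All numbers below are assumptions chosen for illustration only.
  const int key_frame_bytes = 50 * 1024;   // one large I-frame
  const int rtp_payload_bytes = 1200;      // RTP payload per packet (< MTU)
  const int pacing_interval_ms = 5;        // process interval of the periodic pacer
  const int target_bitrate_bps = 2000000;  // bitrate reported by the estimator

  const int packets_per_frame =
      (key_frame_bytes + rtp_payload_bytes - 1) / rtp_payload_bytes;
  // Bytes the pacer is allowed to send per interval at that bitrate.
  const int budget_bytes_per_interval =
      target_bitrate_bps / 8 * pacing_interval_ms / 1000;

  std::printf("I-frame -> %d RTP packets, pacing budget %d bytes per %d ms\n",
              packets_per_frame, budget_bytes_per_interval, pacing_interval_ms);
  return 0;
}

With these assumed numbers, one key frame becomes 43 RTP packets while a single 5 ms budget covers only about one packet's worth of data, which is exactly the kind of burst the pacer smooths out over successive intervals.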

Source code analysis

RtpTransportControllerSend is the transport control object on the RTP sending side.

RtpTransportControllerSend::RtpTransportControllerSend(
    Clock* clock,
    webrtc::RtcEventLog* event_log,
    NetworkStatePredictorFactoryInterface* predictor_factory,
    NetworkControllerFactoryInterface* controller_factory,
    const BitrateConstraints& bitrate_config,
    std::unique_ptr<ProcessThread> process_thread,
    TaskQueueFactory* task_queue_factory,
    const WebRtcKeyValueConfig* trials)
    : clock_(clock),
      event_log_(event_log),
      bitrate_configurator_(bitrate_config),
      pacer_started_(false),
      process_thread_(std::move(process_thread)),
      use_task_queue_pacer_(IsEnabled(trials, "WebRTC-TaskQueuePacer")),
      process_thread_pacer_(use_task_queue_pacer_
                                ? nullptr
                                : new PacedSender(clock,
                                                  &packet_router_,
                                                  event_log,
                                                  trials,
                                                  process_thread_.get())),
      task_queue_pacer_(
          use_task_queue_pacer_
              ? new TaskQueuePacedSender(
                    clock,
                    &packet_router_,
                    event_log,
                    trials,
                    task_queue_factory,
                    /*hold_back_window = */ PacingController::kMinSleepTime)
              : nullptr),
      observer_(nullptr),
      controller_factory_override_(controller_factory),
      controller_factory_fallback_(
          std::make_unique<GoogCcNetworkControllerFactory>(predictor_factory)),
      process_interval_(controller_factory_fallback_->GetProcessInterval()),
      last_report_block_time_(Timestamp::Millis(clock_->TimeInMilliseconds())),
      reset_feedback_on_route_change_(
          !IsEnabled(trials, "WebRTC-Bwe-NoFeedbackReset")),
      send_side_bwe_with_overhead_(
          !IsDisabled(trials, "WebRTC-SendSideBwe-WithOverhead")),
      add_pacing_to_cwin_(
          IsEnabled(trials, "WebRTC-AddPacingToCongestionWindowPushback")),
      relay_bandwidth_cap_("relay_cap", DataRate::PlusInfinity()),
      transport_overhead_bytes_per_packet_(0),
      network_available_(false),
      retransmission_rate_limiter_(clock, kRetransmitWindowSizeMs),
      task_queue_(task_queue_factory->CreateTaskQueue(
          "rtp_send_controller",
          TaskQueueFactory::Priority::NORMAL)) {
  ParseFieldTrial({&relay_bandwidth_cap_},
                  trials->Lookup("WebRTC-Bwe-NetworkRouteConstraints"));
  initial_config_.constraints = ConvertConstraints(bitrate_config, clock_);
  initial_config_.event_log = event_log;
  initial_config_.key_value_config = trials;
  RTC_DCHECK(bitrate_config.start_bitrate_bps > 0);

  pacer()->SetPacingRates(
      DataRate::BitsPerSec(bitrate_config.start_bitrate_bps), DataRate::Zero());

  if (absl::StartsWith(trials->Lookup("WebRTC-LazyPacerStart"), "Disabled")) {
    EnsureStarted();
  }
}

use_task_queue_pacer_ is controlled by the "WebRTC-TaskQueuePacer" field trial; depending on its value, either process_thread_pacer_ or task_queue_pacer_ is created and used. Whether the pacer is a PacedSender or a TaskQueuePacedSender, its core is a PacingController pacing_controller_.
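
The pacer() accessor called in the constructor above is not shown in the snippet; in essence it simply selects between the two members. A sketch of what it amounts to (the exact signature in the real source may differ):

// Sketch: return the task-queue based pacer when the field trial enables it,
// otherwise the process-thread based PacedSender.
RtpPacketPacer* RtpTransportControllerSend::pacer() {
  if (use_task_queue_pacer_) {
    return task_queue_pacer_.get();
  }
  return process_thread_pacer_.get();
}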

PacingController

This class implements a leaky-bucket packet pacing algorithm. It contains the logic that decides when to send which packet, but the actual timing of the processing is driven externally (for example by PacedSender), and forwarding packets once they are ready to send is also handled externally, through the PacingController::PacketSender interface. It has two processing modes:

  1. kPeriodic (periodic mode) uses the IntervalBudget class to track the bitrate budget and expects ProcessPackets() to be called at a fixed rate, for example every 5 milliseconds as implemented by PacedSender (a simplified budget sketch follows this list).
  2. kDynamic (dynamic mode) allows an arbitrary time interval between calls to ProcessPackets().
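
A minimal, self-contained sketch of the idea behind the periodic budget; this illustrates the concept only and is not the real webrtc::IntervalBudget (which, for example, also lets the budget go negative as "debt"):

#include <algorithm>
#include <cstdint>

// Simplified illustration of an interval budget: every process interval the
// budget grows by rate * elapsed time, and every sent packet consumes it.
class SimpleIntervalBudget {
 public:
  explicit SimpleIntervalBudget(int64_t target_bitrate_bps)
      : target_bitrate_bps_(target_bitrate_bps) {}

  // Called once per process interval (e.g. every 5 ms by the periodic pacer).
  void IncreaseBudget(int64_t elapsed_ms) {
    int64_t added_bytes = target_bitrate_bps_ * elapsed_ms / 8000;
    // Cap how much unused budget may accumulate, so idle periods do not
    // translate into an unbounded burst later.
    bytes_remaining_ =
        std::min(bytes_remaining_ + added_bytes, max_bytes_in_budget());
  }

  // Called after a packet has been handed to the network.
  void UseBudget(int64_t bytes) { bytes_remaining_ -= bytes; }

  bool CanSend() const { return bytes_remaining_ > 0; }

 private:
  int64_t max_bytes_in_budget() const {
    // Allow at most ~500 ms worth of data to accumulate (arbitrary choice
    // made for this sketch).
    return target_bitrate_bps_ * 500 / 8000;
  }

  int64_t target_bitrate_bps_;
  int64_t bytes_remaining_ = 0;
};
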
PacingController::EnqueuePacket() adds a packet
void PacingController::EnqueuePacket(std::unique_ptr<RtpPacketToSend> packet) {
  RTC_DCHECK(pacing_bitrate_ > DataRate::Zero())
      << "SetPacingRate must be called before InsertPacket.";
  RTC_CHECK(packet->packet_type());
  // Get priority first and store in temporary, to avoid chance of object being
  // moved before GetPriorityForType() being called.
  const int priority = GetPriorityForType(*packet->packet_type());
  EnqueuePacketInternal(std::move(packet), priority);
}

EnqueuePacket() adds the packet to the queue; PacketRouter::SendPacket() is called later, when the packet's send time is reached. It first calls GetPriorityForType() to obtain the packet's priority, then calls EnqueuePacketInternal() to push the packet into the RoundRobinPacketQueue packet_queue_.
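
As a simplified model of that ordering (the real RoundRobinPacketQueue additionally round-robins between streams and accounts for time spent in the queue; the sketch below keeps only the priority-plus-FIFO part, and the types are illustrative, not WebRTC's):

#include <cstdint>
#include <queue>
#include <string>
#include <vector>

// Simplified queued-packet record; the real queue stores RtpPacketToSend.
struct QueuedPacket {
  int priority;            // lower value = dequeued earlier
  uint64_t enqueue_order;  // FIFO order within the same priority
  std::string media_type;  // e.g. "audio", "video" (illustrative only)
};

struct QueuedPacketCompare {
  bool operator()(const QueuedPacket& a, const QueuedPacket& b) const {
    if (a.priority != b.priority)
      return a.priority > b.priority;          // smaller priority value wins
    return a.enqueue_order > b.enqueue_order;  // otherwise oldest first
  }
};

// Packets pop out in (priority, enqueue order) order, matching the
// GetPriorityForType() ordering shown below.
using SimplePacketQueue =
    std::priority_queue<QueuedPacket, std::vector<QueuedPacket>,
                        QueuedPacketCompare>;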

GetPriorityForType

Get priority according to packet type

int GetPriorityForType(RtpPacketMediaType type) {
  // Lower number takes priority over higher.
  switch (type) {
    case RtpPacketMediaType::kAudio:
      // Audio is always prioritized over other packet types.
      return kFirstPriority + 1;
    case RtpPacketMediaType::kRetransmission:
      // Send retransmissions before new media.
      return kFirstPriority + 2;
    case RtpPacketMediaType::kVideo:
    case RtpPacketMediaType::kForwardErrorCorrection:
      // Video has "normal" priority, in the old speak.
      // Send redundancy concurrently to video. If it is delayed it might have a
      // lower chance of being useful.
      return kFirstPriority + 3;
    case RtpPacketMediaType::kPadding:
      // Packets that are in themselves likely useless, only sent to keep the
      // BWE high.
      return kFirstPriority + 4;
  }
  RTC_CHECK_NOTREACHED();
}

Audio always takes precedence over other packet types.
Retransmission packets are sent before new media.
Video and FEC share the "normal" priority; redundancy is sent concurrently with video, because if it is delayed it has a lower chance of being useful.
Padding packets have the lowest priority; they are likely useless in themselves and are sent only to keep the bandwidth estimate (BWE) high.

PacingController::EnqueuePacketInternal

Pushes the packet into packet_queue_ according to its priority.

void PacingController::EnqueuePacketInternal(
    std::unique_ptr<RtpPacketToSend> packet,
    int priority) {
  prober_.OnIncomingPacket(DataSize::Bytes(packet->payload_size()));

  Timestamp now = CurrentTime();

  if (mode_ == ProcessMode::kDynamic && packet_queue_.Empty() &&
      NextSendTime() <= now) {
    TimeDelta elapsed_time = UpdateTimeAndGetElapsed(now);
    UpdateBudgetWithElapsedTime(elapsed_time);
  }
  packet_queue_.Push(priority, now, packet_counter_++, std::move(packet));
}
PacingController::ProcessPackets() processes packets
void PacingController::ProcessPackets() {
    ......
  bool first_packet_in_probe = false;
  PacedPacketInfo pacing_info;
  DataSize recommended_probe_size = DataSize::Zero();
  bool is_probing = prober_.is_probing();
  if (is_probing) {
    // Probe timing is sensitive, and handled explicitly by BitrateProber, so
    // use actual send time rather than target.
    pacing_info = prober_.CurrentCluster(now).value_or(PacedPacketInfo());
    if (pacing_info.probe_cluster_id != PacedPacketInfo::kNotAProbe) {
      first_packet_in_probe = pacing_info.probe_cluster_bytes_sent == 0;
      recommended_probe_size = prober_.RecommendedMinProbeSize();
      RTC_DCHECK_GT(recommended_probe_size, DataSize::Zero());
    } else {
      // No valid probe cluster returned, probe might have timed out.
      is_probing = false;
    }
  }
  ......
  
  while (!paused_) {
  ......
    std::unique_ptr<RtpPacketToSend> rtp_packet =
        GetPendingPacket(pacing_info, target_send_time, now);
  ......
    packet_sender_->SendPacket(std::move(rtp_packet), pacing_info);
    for (auto& packet : packet_sender_->FetchFec()) {
      EnqueuePacket(std::move(packet));
    }
    data_sent += packet_size;

    // Send done, update send/process time to the target send time.
    OnPacketSent(packet_type, packet_size, target_send_time);

    // If we are currently probing, we need to stop the send loop when we have
    // reached the send target.
    if (is_probing && data_sent >= recommended_probe_size) {
      break;
    }
  ......
  }
}

The recommended probe size is fetched from the bitrate prober prober_. The loop then pops the highest-priority packets from packet_queue_ and sends them until the amount of data already sent is greater than or equal to the recommended size, at which point it breaks out and waits for the next round of transmission.
If there are no more packets waiting in the pacer queue but the budget still allows more data to be sent, the pacer tops up with padding packets, as sketched below.
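
A simplified stand-in for that padding decision; this is a hedged sketch using plain integers instead of WebRTC's DataSize (in this revision the corresponding logic sits in PacingController::PaddingToAdd(), and the actual padding packets are produced through the PacketSender::GeneratePadding() callback):

#include <algorithm>
#include <cstdint>

// Only add padding when no media is queued and we are not congested;
// while probing, top the burst up to the recommended probe size instead.
int64_t PaddingBytesToAdd(bool queue_empty,
                          bool congested,
                          int64_t padding_budget_bytes,
                          int64_t recommended_probe_bytes,
                          int64_t data_sent_bytes) {
  if (!queue_empty || congested)
    return 0;  // real media (or congestion) takes precedence over padding

  if (recommended_probe_bytes > 0) {
    // Probing: fill the remainder of the recommended probe size.
    return std::max<int64_t>(0, recommended_probe_bytes - data_sent_bytes);
  }
  // Otherwise send only as much padding as the padding budget allows.
  return std::max<int64_t>(0, padding_budget_bytes);
}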
