Optimization strategy

Creating parallel algorithms is not a simple task, and there is no universal solution. In every case, you have to use a specific approach to write effective code. However, there are several simple rules that work for most parallel programs.

Lock localization

The first thing to take into account when writing parallel code is to lock as little code as possible, and to ensure that the code inside the lock runs as fast as possible. This makes the program less deadlock-prone and helps it scale better with the number of CPU cores. To sum up, acquire the lock as late as possible and release it as soon as possible.

Let us consider the following situation: suppose we have some calculation performed by a Calc method that has no side effects. We would like to call it with several different arguments and store the results in a list. The first intention is to write the code as follows:

for (var i = from; i < from + count; i++)
  lock (_result)
    _result.Add(Calc(i));

This code works, but we call the Calc method and perform the calculation inside our lock. This calculation does not have any side effects, and thus requires no locking, so it would be much more efficient to rewrite the code as shown next:

for (var i = from; i < from + count; i++)
{
  var calc = Calc(i);
  lock (_result)
    _result.Add(calc);
}

If the calculation takes a significant amount of time, then this improvement could make the code run several times faster.

Shared data minimization

Another way to improve parallel code performance is to minimize the shared data that is written in parallel. A common mistake is to lock the whole collection every time we write into it, instead of reducing both the number of locks taken and the amount of data being locked. Organizing concurrent access and data storage in a way that minimizes the number of locks can lead to a significant performance increase.

In the previous example, we locked the entire collection on every write. However, we do not really care which worker thread processes which piece of data, so each thread can accumulate its results in a local list and merge them into the shared collection in a single locked operation, like this:

var tempRes = new List<string>(count);
for (var i = from; i < from + count; i++)
{
  var calc = Calc(i);
  tempRes.Add(calc);
}
lock (_result)
  _result.AddRange(tempRes);

The following is the complete comparison:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;

static class Program
{
  private const int _count = 1000000;
  private const int _threadCount = 8;

  private static readonly List<string> _result = new List<string>();

  private static string Calc(int prm) 
  {
    Thread.SpinWait(100);
    return prm.ToString();
  }

  private static void SimpleLock(int from, int count) 
  {
    for (var i = from; i < from + count; i++)
      lock (_result)
        _result.Add(Calc(i));
  }

  private static void MinimizedLock(int from, int count) 
  {
    for (var i = from; i < from + count; i++) 
    {
      var calc = Calc(i);
      lock (_result)
        _result.Add(calc);
    }
  }

  private static void MinimizedSharedData(int from, int count) 
  {
    var tempRes = new List<string>(count);
    for (var i = from; i < from + count; i++)
    {
      var calc = Calc(i);
      tempRes.Add(calc);
    }
    lock (_result)
      _result.AddRange(tempRes);
  }

  private static long Measure(Func<int, ThreadStart> actionCreator)
  {
    _result.Clear();
    var threads =
      Enumerable
        .Range(0, _threadCount)
        .Select(n => new Thread(actionCreator(n)))
        .ToArray();
    var sw = Stopwatch.StartNew();
    foreach (var thread in threads)
      thread.Start();
    foreach (var thread in threads)
      thread.Join();
    sw.Stop();
    return sw.ElapsedMilliseconds;
  }

  static void Main()
  {
    // Warm up
    SimpleLock(1, 1);
    MinimizedLock(1, 1);
    MinimizedSharedData(1, 1);

    const int part = _count / _threadCount;

    var time = Measure(n => () => SimpleLock(n*part, part));
    Console.WriteLine("Simple lock: {0}ms", time);

    time = Measure(n => () => MinimizedLock(n * part, part));
    Console.WriteLine("Minimized lock: {0}ms", time);

    time = Measure(n => () => MinimizedSharedData(n * part, part));
    Console.WriteLine("Minimized shared data: {0}ms", time);
  }
}

Executing this code on a Core i7 2600K CPU with a 64-bit OS in the Release configuration gives the following results:

Simple lock: 806ms
Minimized lock: 321ms
Minimized shared data: 165ms
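
The same shared data minimization idea maps naturally onto the Task Parallel Library's thread-local loop state. The following sketch is not part of the original comparison; it reuses the same hypothetical Calc workload and _result list, and relies on the Parallel.For overload with localInit and localFinally delegates, so that each worker accumulates results privately and touches the shared lock only once, when merging:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

static class ParallelForSample
{
  private static readonly List<string> _result = new List<string>();

  // Same hypothetical workload as in the comparison above.
  private static string Calc(int prm)
  {
    Thread.SpinWait(100);
    return prm.ToString();
  }

  static void Main()
  {
    Parallel.For(0, 1000000,
      // localInit: every worker thread starts with its own private list
      () => new List<string>(),
      // body: add to the thread-local list, so no lock is needed here
      (i, state, local) =>
      {
        local.Add(Calc(i));
        return local;
      },
      // localFinally: merge each worker's list into the shared one
      // under a single, short-lived lock per worker
      local =>
      {
        lock (_result)
          _result.AddRange(local);
      });

    Console.WriteLine("Items produced: {0}", _result.Count);
  }
}

Here the lock is taken once per worker rather than once per item, which is the same trade-off that the MinimizedSharedData method makes by hand.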