Optimization strategy

Creating parallel algorithms is not a simple task: there is no universal solution. In every case, you have to use a specific approach to write effective code. However, there are several simple rules that work for most parallel programs.

Lock localization

The first thing to take into account when writing parallel code is to lock as little code as possible, and to ensure that the code inside the lock runs as fast as possible. This makes the program less prone to deadlocks and helps it scale better with the number of CPU cores. To sum up, acquire the lock as late as possible and release it as soon as possible.

Let us consider the following situation: we have some calculation performed by a Calc method that has no side effects. We would like to call it with several different arguments and store the results in a list. The first instinct is to write the code as follows:

for (var i = from; i < from + count; i++)
  lock (_result)
    _result.Add(Calc(i));

This code works, but we call the Calc method and perform the calculation inside the lock. The calculation has no side effects and therefore requires no locking, so it is much more efficient to rewrite the code as shown next:

for (var i = from; i < from + count; i++)
{
  var calc = Calc(i);
  lock (_result)
    _result.Add(calc);
}

If the calculation takes a significant amount of time, then this improvement could make the code run several times faster.

Shared data minimization

Another way of improving the performance of parallel code is to minimize the shared data that is written in parallel. A common mistake is to lock the whole collection every time we write to it, instead of reducing both the number of lock operations and the amount of data being locked. Organizing concurrent access and data storage so that locking is minimized can lead to a significant performance increase.

In the previous example, we locked the entire collection for every write. However, we do not really care which worker thread processes which piece of data, so each thread can accumulate its results in its own temporary list, which needs no locking, and merge them into the shared collection with a single lock at the end:

var tempRes = new List<string>(count);
for (var i = from; i < from + count; i++)
{
  var calc = Calc(i);
  tempRes.Add(calc);
}
lock (_result)
  _result.AddRange(tempRes);

The following is the complete comparison:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;

static class Program
{
  private const int _count = 1000000;
  private const int _threadCount = 8;

  private static readonly List<string> _result = new List<string>();

  private static string Calc(int prm) 
  {
    Thread.SpinWait(100);
    return prm.ToString();
  }

  private static void SimpleLock(int from, int count) 
  {
    for (var i = from; i < from + count; i++)
      lock (_result)
        _result.Add(Calc(i));
  }

  private static void MinimizedLock(int from, int count) 
  {
    for (var i = from; i < from + count; i++) 
    {
      var calc = Calc(i);
      lock (_result)
        _result.Add(calc);
    }
  }

  private static void MinimizedSharedData(int from, int count) 
  {
    var tempRes = new List<string>(count);
    for (var i = from; i < from + count; i++)
    {
      var calc = Calc(i);
      tempRes.Add(calc);
    }
    lock (_result)
      _result.AddRange(tempRes);
  }

  private static long Measure(Func<int, ThreadStart> actionCreator)
  {
    _result.Clear();
    var threads =
      Enumerable
        .Range(0, _threadCount)
        .Select(n => new Thread(actionCreator(n)))
        .ToArray();
    var sw = Stopwatch.StartNew();
    foreach (var thread in threads)
      thread.Start();
    foreach (var thread in threads)
      thread.Join();
    sw.Stop();
    return sw.ElapsedMilliseconds;
  }

  static void Main()
  {
    // Warm up
    SimpleLock(1, 1);
    MinimizedLock(1, 1);
    MinimizedSharedData(1, 1);

    const int part = _count / _threadCount;

    var time = Measure(n => () => SimpleLock(n*part, part));
    Console.WriteLine("Simple lock: {0}ms", time);

    time = Measure(n => () => MinimizedLock(n * part, part));
    Console.WriteLine("Minimized lock: {0}ms", time);

    time = Measure(n => () => MinimizedSharedData(n * part, part));
    Console.WriteLine("Minimized shared data: {0}ms", time);
  }
}

Executing this code on a Core i7-2600K with a 64-bit OS in the Release configuration gives the following results:

Simple lock: 806ms
Minimized lock: 321ms
Minimized shared data: 165ms
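
Narrowing the lock scope roughly halves the running time here, and minimizing the shared data halves it again. The same shared data minimization idea can also be expressed with the Task Parallel Library: the Parallel.For overload with thread-local state lets each worker fill its own list and touch the shared collection only once. The following is a minimal sketch of this approach; it is not part of the measured comparison above, it reuses the Calc method and the _result list from the listing, and it requires the System.Threading.Tasks namespace:

private static void ParallelForWithLocalState()
{
  Parallel.For(0, _count,
    // localInit: each worker starts with its own empty list
    () => new List<string>(),
    // body: add to the thread-local list, no locking required
    (i, loopState, local) =>
    {
      local.Add(Calc(i));
      return local;
    },
    // localFinally: merge the thread-local results into the shared
    // list, taking the lock only once per worker
    local =>
    {
      lock (_result)
        _result.AddRange(local);
    });
}

With this overload, the shared list is locked only a few times (once per worker partition) instead of once per element, which is exactly the effect we achieved manually in MinimizedSharedData.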