C# - ConcurrentBag vs Custom Thread Safe List


I have a .NET 4.5 single-instance WCF service that maintains a collection of items in a list. There are simultaneous concurrent readers and writers, with far more readers than writers.

I am deciding whether to use the BCL ConcurrentBag&lt;T&gt; or my own custom generic ThreadSafeList class (which implements IList&lt;T&gt; and encapsulates the BCL ReaderWriterLockSlim, making it better suited to multiple concurrent readers).
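For reference, a minimal sketch of what such a wrapper might look like is shown below. The actual class isn't included in the question, so the member names and the read/write locking split here are assumptions; only Add and enumeration are shown, whereas a full IList&lt;T&gt; implementation would guard every member the same way.

using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

// Assumed shape of a ReaderWriterLockSlim-based list wrapper, not the
// actual ThreadSafeList from the question.
public class ThreadSafeList<T> : IEnumerable<T>
{
    private readonly List<T> _items = new List<T>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public void Add(T item)
    {
        _lock.EnterWriteLock();
        try { _items.Add(item); }
        finally { _lock.ExitWriteLock(); }
    }

    public IEnumerator<T> GetEnumerator()
    {
        _lock.EnterReadLock();
        try
        {
            // Snapshot under the read lock so concurrent writers cannot
            // invalidate the enumerator mid-iteration.
            return _items.ToList().GetEnumerator();
        }
        finally { _lock.ExitReadLock(); }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}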

I have found notable performance differences when testing these implementations by simulating a concurrent scenario of 1M readers (simply running a Sum LINQ query) and 100 writers (adding items to the list).

For the performance test I have a list of tasks:

List<Task> tasks = new List<Task>();

Test 1: If I create 1M reader tasks followed by 100 writer tasks using the following code:

tasks.AddRange(Enumerable.Range(0, 1000000)
    .Select(n => new Task(() => { temp.Where(t => t < 1000).Sum(); }))
    .ToArray());
tasks.AddRange(Enumerable.Range(0, 100)
    .Select(n => new Task(() => { temp.Add(n); }))
    .ToArray());

I get the following timing results:

  • ConcurrentBag: ~300ms
  • ThreadSafeList: ~520ms

Test 2: However, if I create 1M reader tasks mixed with 100 writer tasks (whereby the list of tasks is executed as {reader, reader, writer, reader, reader, writer, etc.}):

foreach (var item in Enumerable.Range(0, 1000000))
{
    tasks.Add(new Task(() => temp.Where(t => t < 1000).Sum()));
    if (item % 10000 == 0)
        tasks.Add(new Task(() => temp.Add(item)));
}

I get the following timing results:

  • ConcurrentBag: ~4000ms
  • ThreadSafeList: ~800ms

My code for measuring the execution time of each test is as follows:

Stopwatch watch = new Stopwatch();
watch.Start();
tasks.ForEach(task => task.Start());
Task.WaitAll(tasks.ToArray());
watch.Stop();
Console.WriteLine("Time: {0}ms", watch.Elapsed.TotalMilliseconds);

The efficiency of ConcurrentBag in Test 1 is better than my custom implementation, but its efficiency in Test 2 is worse, so I'm finding it a difficult decision which one to use.

Q1. Why are the results so different when the only thing I'm changing is the ordering of the tasks within the list?

Q2. Is there a better way to change the test to make it more fair?

Why are the results so different when the only thing I'm changing is the ordering of the tasks within the list?

My best guess is that in test #1 the readers do not actually read any items, because there is nothing to read yet. The order of task execution is:

  1. Read the shared pool 1M times and calculate the sum
  2. Write to the shared pool 100 times

Your test #2 mixes reads and writes, and that, I am guessing, is why you get a different result: the readers are no longer summing an almost-empty collection, so reads and writes genuinely compete for the shared resource.

Is there a better way to change the test to make it more fair?

Before you start the tasks, try randomising the order of the tasks. It might be difficult to reproduce the same result each run, but it may be closer to real-world usage.
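For example, one simple way to shuffle before timing (ordering by Guid.NewGuid() is just a quick-and-dirty shuffle; a seeded Random would make runs reproducible):

// Shuffle the tasks so reads and writes interleave unpredictably,
// which is closer to real-world access patterns.
var shuffled = tasks.OrderBy(t => Guid.NewGuid()).ToList();

var watch = Stopwatch.StartNew();
shuffled.ForEach(task => task.Start());
Task.WaitAll(shuffled.ToArray());
watch.Stop();
Console.WriteLine("Time: {0}ms", watch.Elapsed.TotalMilliseconds);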

Ultimately, your question comes down to the difference between optimistic concurrency (the Concurrent* classes) and pessimistic concurrency (based on a lock). As a rule of thumb, when the chances of simultaneous access to the shared resource are low, prefer optimistic concurrency; when the chances of simultaneous access are high, prefer the pessimistic approach.

