Benchmarking with Mpts thread
sssf
2007-05-16 09:45:38
It might be that CPU 0 and 2 are physically the same processor (also for 1 and 3).
This is caused by Windows enumerating the two sockets first; their first cores become CPU 0 and CPU 1 (CPU 0 is the first core in socket 1, CPU 1 the first core in socket 2).
Once Windows detects that they are dual-core processors, it adds the second cores of both processors as CPU 2 & 3 (CPU 2 for the second core of socket 1, CPU 3 for the second core of socket 2).
So you end up with this situation:
socket 1: first core: cpu 0, second core: cpu 2
socket 2: first core: cpu 1, second core: cpu 3

This story holds for dual-socket motherboards running Intel HT processors; I'm not sure whether it also applies to AMD processors, because they physically have 2 cores in each package rather than the HT "trick".
This could even explain why your results with affinity set are lower than with no affinity set.

It might be worth the effort to check how Windows distributes the numbering of the CPUs.
To do this you could try running 2 clients with 2 threads each: client 1 with affinity set to CPU 0 and 2, client 2 with affinity set to CPU 1 & 3.
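The numbering sssf describes can be written as a tiny mapping. This is a sketch of the hypothesis only — the function name and 0-indexing are mine, and real enumeration varies by BIOS and OS:

```python
# A sketch of sssf's hypothesised numbering only -- not how Windows
# actually reports topology; real enumeration varies by BIOS and OS.
def socket_core(cpu_id, sockets=2):
    """Map a logical CPU ID to (socket, core), 0-indexed, assuming
    Windows enumerates the first core of every socket before any
    second core."""
    return (cpu_id % sockets, cpu_id // sockets)

# Under this hypothesis: CPU 0 and CPU 2 share socket 0,
# CPU 1 and CPU 3 share socket 1 -- matching the pairing above.
```

If the hypothesis holds, pinning one client to CPUs 0 & 2 confines it to one physical package, which is exactly what the affinity test above would reveal.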

[TP]Skatoony
2007-05-17 09:53:18
Here are my results for an AMD 64 X2 4800+ (socket 939, 2.4GHz, 200MHz FSB):

One client in multi-threaded mode: Average 340kpts/sec
Two clients in single-threaded mode: Average 230kpts/s per client

Seems running a client on each core is better (they're in separate folders, and I also renamed the executables to make sure there weren't any problems).
[TP]Skatoony
2007-05-17 09:55:10
After checking the table on page 5, something isn't right.  Why do I have the slowest dual-core CPU when it should *technically* be faster?  That makes no sense :/
Pascal
2007-05-20 10:55:05
As there is still no good description of the benchmarking tool anywhere, I am not going to test any more data.
Stephen Brooks
2007-05-20 14:31:21
How do I use the Muon1Bench.exe program?  First, make sure sample-file downloading is switched off in config.txt.  Then put the program in your Muon1 directory (same place as muon1.exe), start it, and simply leave it running while Muon1 is working.  It will produce a file BenchCSV.log with three columns: the first logging the time in seconds, the second the number of new Mpts Muon1 has produced, and the third a running estimate of the speed in units of kpts/sec (1000 kpts = 1 Mpts).  Once you've run it for a day or so (enough for the estimate to be stable), post your results to the benchmarking thread in the forum.

On a dual-core machine you can either start one client with auto threads (this is what most people do) and run the benchmark program alongside for a couple of days, or you can create two Muon1 directories, each with a benchmark in it and set to 1 thread; this may end up being slightly faster when you add the totals together.
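The running estimate described above can be reproduced from BenchCSV.log in a few lines. A sketch assuming the three-column format Stephen describes; the tool's own rounding may differ slightly (the sample rows are from Pascal's log later in the thread):

```python
import csv, io

def speed_estimate(samples):
    """Running speed estimate from (uptime_secs, mpts) samples,
    measured from the first sample: 1 Mpts = 1000 kpts."""
    t0, m0 = samples[0]
    t1, m1 = samples[-1]
    return 0.0 if t1 == t0 else (m1 - m0) * 1000.0 / (t1 - t0)

log = """Uptime (secs),Mpts in file,Estimate kpts/sec
700,202750.0,0.00
1301,202856.1,176.68
1901,202973.3,185.92
"""
# Skip the header row, keep (uptime, Mpts) pairs.
samples = [(float(t), float(m))
           for t, m, _ in csv.reader(io.StringIO(log))
           if t[0].isdigit()]
print(round(speed_estimate(samples), 2))  # close to the log's own 185.92
```

This also makes clear why the first row of every run shows 0.00: with only the baseline sample, no time has elapsed yet.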
Pascal
2007-05-20 18:36:52
Stephen,
nice to get an answer, but:
1. One client with two threads seems to be less efficient than two clients, each with a single thread.
2. If the autosave interval is too low, the log contains lines with zero values.  How can I prevent this, independently of the autosave function?
Stephen Brooks
2007-05-20 18:58:31
1. Yes, correct.

2. I haven't heard of this bug before.  How low does the auto-save have to be set for this to happen?
Pascal
2007-05-21 03:43:35
I set it to 300 seconds, because that is what I want for the clients.  Just try it.
Stephen Brooks
2007-05-22 11:57:51
It's strange; 300 seconds ought to be fine.  Anyone else having this problem?  I should probably make the units "minutes" rather than seconds, since if it were set to "1" second, a save might end up taking longer than that.
Pascal
2007-05-22 12:20:15
Now I set it to 900 seconds, with the result as shown below:

73249,201408.0,145.37
73849,201524.5,161.57
74450,201686.0,188.41
78653,202412.6,178.51
79254,202537.3,180.94
Uptime (secs),Mpts in file,Estimate kpts/sec
700,202750.0,0.00
1301,202856.1,176.68
1901,202973.3,185.92
2502,203054.1,168.80
Uptime (secs),Mpts in file,Estimate kpts/sec
Uptime (secs),Mpts in file,Estimate kpts/sec
9641,14584.3,0.00
12042,14988.4,168.36
Uptime (secs),Mpts in file,Estimate kpts/sec
2024,54302.7,0.00
2624,54451.5,247.96
Uptime (secs),Mpts in file,Estimate kpts/sec
Uptime (secs),Mpts in file,Estimate kpts/sec
54784,89438.9,0.00
55684,89625.0,206.75
Xanathorn
2007-05-22 12:37:40
I never had any problems using the muonbench program, no matter what interval I set it to.  Last time it ran for 4 days straight without a single glitch.  I'm using Windows XP Pro SP2 though; might it be something with the Windows 2000 Pascal is using?
Pascal
2007-05-22 12:44:36
Xanathorn: I also have XP with SP2.
Xanathorn
2007-05-22 12:48:05
Ah, ok then.  I'm off to work now but will start up the muonbench on both clients, one with a 300 sec and the other with a 900 sec interval, for the next 9 hours.  I'll see when I come back from work if anything strange comes up.
Xanathorn
2007-05-22 22:42:33
Ok, I'm just back from work and looked at the benchmark output.  It looks like there are no anomalies; both muonbenches ran for 10 hours straight.  Note though this isn't my PC's maximum output; my mother likes to play Java games when I'm at work, so it eats some CPU power.

Both benchmark files:
Core 1: http://xanathorn.demon.nl/muon/benchcsv_client1.txt
Core 2: http://xanathorn.demon.nl/muon/benchcsv_client2.txt
Xanathorn
2007-05-22 22:47:00
Oh yeah "Core 1" had the interval set to 300 seconds and "Core 2" set to 900 seconds.
Pascal
2007-05-23 16:02:07
Core 1:
2498,106166.1,62.83
3698,106654.4,234.85
6098,107143.0,219.20
8498,107729.5,227.58
9699,108021.4,229.82
10899,108271.4,227.13
13299,108896.9,233.82
14499,109245.3,238.96
15699,109569.0,241.52
16900,109595.6,224.65
18100,110084.1,237.67
20500,110733.5,241.78
22900,111222.0,237.53
24100,111512.7,237.78
25301,111832.9,239.23
27701,112321.5,235.99
28901,112810.0,243.42


Core 2:
2514,105491.2,135.82
6114,106398.3,222.91
7314,106713.7,230.89
9715,107202.3,223.08
12115,107853.8,233.83
13315,108015.2,223.89
14515,108503.6,240.53
16915,108992.2,234.85
18115,109313.5,237.19
19373,109664.7,240.12
20573,109987.1,241.90
21773,110249.6,240.54
22973,110530.9,240.20
24174,110857.3,241.87
25374,110908.0,231.91
26574,111396.6,240.23
27774,111718.9,241.52
28974,111978.3,240.42
30174,112302.5,241.65

Seems to be something like 480 kpts/sec.
Ok, the CPU runs at 2.47 GHz.
[TA]Assimilator1
2007-05-24 14:54:53
I must get round to re-testing my Sempron 3100 to reclaim the 2nd spot in the single-CPU chart

Pascal
>>>>How do I use the benchmark and the client correctly on an AMD Athlon X2?  There is no description anywhere how to use<<<<
Yes there is, in this thread, as other people have already said and done.
Btw, what do you think an unofficial reply is going to do to your rig/DPAD clients?  Cause a thermonuclear detonation??
Seriously though, the worst that can happen is some lost Mpts from not choosing the most efficient method (which is 1 client per core).
Also, I've never heard of v5.

>>>nice to get an answer<<<
He had already answered you, if you'd read the thread properly

Afraid I can't help you with those '0.0' lines (weird); I've not had that problem either, and my save interval was 120s.
Sounds like you've got some other problem, no idea what though.  Your PC doesn't randomly crash, does it?

[DPC] Eclipse~NaWA
Nice graphs, though I wondered why you tested 3 threads on 2 clients??
You've gained a decent boost in output by finding the best method

TurtleBlue
So you've o/c'ed the CPU by about 250MHz & it's made no difference??  That doesn't make sense unless you've had to back off the RAM timings a lot.  How long did you run the benchmark for?

[TP]Skatoony
Your 460 kpts/s total seems to fall into the right area on the graph; the only odd score is OcUK diogenese, maybe his rig is o/c'ed & he doesn't say?
Though JonB's score seems a little low, maybe he ran just a single client?
Pascal
2007-05-24 16:21:48
[TA]Assimilator1:
It definitely sucks if you have to click more than three times to find the information necessary for correct use of the benchmark.  The FAQ page is also incorrect regarding the use of the results.dat file.  Stephen mentioned a v5 client some years ago.
Maybe he himself is not able to answer my question.  He forgot to mention the saving interval to use with the benchmark.  There is still a need for documentation.
The client does not crash, but after a restart of the benchmark, the first two rows of the log file always look like this:

Uptime (secs),Mpts in file,Estimate kpts/sec
61786,199237.1,0.00

I do not know why these lines are really necessary.  It would rather be nice to have an output of the average computing speed over a longer time from this program.
Xanathorn
2007-05-24 23:05:37
Could be me but uhm, if you would like an average over, say, 2 weeks, just let the benchmark program run for 2 weeks in a row without restarting it? 
Those two lines you see are completely normal: every time you close the benchmark program down and start it again, it starts a fresh benchmark.  The first line gives 0 kpts/sec because that's the point where you started the benchmark, and after 0 seconds of course you can't give a kpts/sec estimate (or it would be infinite).
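A longer-term average like the one Pascal asks for can still be recovered from a log with restarts by splitting it at the header lines. A sketch only, assuming the log layout shown in Pascal's posts (the function names are mine):

```python
def sessions(lines, header="Uptime (secs)"):
    """Split a BenchCSV log containing restarts into per-session
    lists of (uptime, mpts) samples; each header line starts a
    fresh session."""
    out = []
    for line in lines:
        if line.startswith(header):
            out.append([])
        elif line.strip() and out:
            t, m, _ = line.split(",")
            out[-1].append((float(t), float(m)))
    return out

def overall_kpts(lines):
    """Average kpts/sec across all sessions: total Mpts gained
    divided by total elapsed time, ignoring gaps between runs."""
    gain = elapsed = 0.0
    for s in sessions(lines):
        if len(s) > 1:
            gain += s[-1][1] - s[0][1]
            elapsed += s[-1][0] - s[0][0]
    return gain * 1000.0 / elapsed if elapsed else 0.0
```

Each session contributes only the time and Mpts it actually covered, so the 0.00 baseline rows stop mattering.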
[TA]Assimilator1
2007-05-25 13:12:16
The auto-saving interval doesn't usually affect the benchmark; that's why it isn't in the FAQ.
Stephen's added the info for multicore CPUs to the FAQ
No idea about the results.dat & the FAQ; what's that about?

And the reason you're getting 0.0 in the benchmark file is because you're re-starting it!  lol, you're supposed to leave it running for a day whilst benchmarking.  From the FAQ: >>>Once you've run it for a day or so<<<
Stephen Brooks
2007-05-25 16:15:24
--[It definitely sucks if you have to klick more than three times to find the necessary information for the correct use of the benchmark.]--

Yes, I suppose it does take more than three clicks, if you count the double click to start your web browser as two...
Pascal
2007-05-25 17:34:32
-- [The auto saving interval doesn't usually effect the benchmark ,that's why it isn't in the FAQ.]
Nice to know, but the time interval for the benchmarking program is still missing from the FAQ. 
Stephen, it really sucks that you have not been able to write a manual for your own tools up to now.  Must I really put a finger in your little *** ??
HaloJones
2007-05-25 21:45:39
Pascal, I don't know if English is your first language or not, but would you quit with the abuse.  If you don't like this project, why not leave?
[TA]Assimilator1
2007-05-28 11:55:25
Pascal
What are you on about??  There is a manual for the benchmarking tool; it's in the FAQ.  OK, so it could be zipped up along with the program, but it's definitely there!

>>>but the time interval for the benchmarking program is still missing in the FAQ<<<

Eh?  What time interval?

Stephen
lol
[TA]Assimilator1
2007-05-28 11:57:40
Damn this forum for having no editing function (typos)

*zipped up along with
Pascal
2007-05-28 12:59:58
The simple information
muon1bench.exe 900

900 = time interval in seconds. 

For that little bit of information you have to click too much.  I like the British people because of their specialised kind of thinking.  Thanks very much for this.  I am going back to Folding@home, since there are too many stupid holes here.
[TA]Assimilator1
2007-05-28 14:07:37
Ok, after running the benchmark for the 1st time in some months I see what interval you're referring to now (my bad memory); you're right that that info is not in the FAQ, but TBH I've always just left it at the default.
Though it would be nice to know how & if it affects scores.  Stephen? 

Oh, & btw, the number of clicks to find the benchmarking FAQ from the DPAD front page is 1.  True, the info about multicore PCs has only recently been added, but if the default install of 1 DPAD client for all cores had been used anyway, it still would have measured multi-core performance just as accurately.

It seems to me that you haven't been running any sort of DC project for any length of time, or you haven't looked into them much; they don't all hold the info about their clients on the front page, and often you have to dig around forums to find the more obscure or finer details of a project.
Oh, & talking about benchmarking, have you ever tried to benchmark BOINC SETI?  A few months ago I was looking into that, & it turns out there is currently no way of benchmarking it (ignoring the useless MIPS etc. test that BOINC has).
SETI classic was easy to benchmark, incidentally: you just grabbed an 'average' WU from the TLC website, stopped the client from sending off results, crunched the WU & noted the recorded time, though that info was not available from the SETI site.  (Some day I might get around to seeing if that can be done with SETI BOINC.)

Anyway, GL with F@H

[TA]JonB
2007-05-29 03:26:05
Now they know how many holes it takes to fill the Albert Hall, minus one.
Pascal
2007-05-29 09:10:04
SETI was my first project, but I never saw any sense in it.  So I left it and started with UD Cancer Research many years ago, after I heard about it from my father.  BOINC definitely sucks too much because of the problem of allocating processes to CPU cores.  So I prefer smart clients, and have tested at least 30 different DC projects in the past years. 
And I have already said what sucks with Muon.  It is not good that Stephen cannot take this problem upon himself and just correct the documentation.  There is already too much said but not done. 
And in the stats I am still missing many particle steps. 
Thanks.
Stephen Brooks
2007-05-29 12:41:41
I'd completely forgotten that Muon1bench had an option to set its poll interval.  I thought it was fixed.  You shouldn't have to change it from the default, that's why there's no documentation.
[TA]Assimilator1
2007-05-29 20:06:37
Ok, I've finally got around to benchmarking my Sempron 3100 again; its specs: 2.5GHz, FSB 278MHz, RAM 227MHz.
It was running for about 18hrs.

33392,94515.6,264.55
33693,94614.6,286.32
33993,94713.8,297.37
34593,94812.7,253.16
35193,94972.0,256.22
35493,95067.5,263.10
35794,95120.2,254.35
36094,95218.0,260.85
36694,95381.6,262.65
36994,95480.2,267.36
37594,95627.1,264.53
38195,95787.9,264.90
38495,95886.8,268.30
38795,95907.2,258.28
39395,96068.7,259.26
39695,96167.5,262.30
40296,96329.0,262.84
40896,96484.9,262.61
41196,96583.8,265.00
41796,96745.5,265.29
42096,96843.6,267.28
42397,96916.9,266.56
42997,97072.1,266.08
43597,97233.4,266.23
46899,98044.1,261.39
47199,98142.6,262.78
47799,98302.3,262.91
48099,98399.0,264.07
48700,98554.4,263.88
49000,98574.6,260.24
49300,98689.5,262.47
49900,98848.8,262.57
50200,98947.1,263.69
50801,99101.6,263.48
51401,99200.3,260.28
51701,99285.8,260.67
52001,99384.8,261.75
52602,99519.9,260.64
52902,99618.6,261.66
53202,99717.6,262.66
53802,99879.8,262.88
54102,99963.6,263.11
54703,100120.7,263.07
55303,100219.7,260.45
55903,100398.5,261.42
56203,100497.2,262.29
56503,100594.8,263.08
57104,100691.1,260.55
57704,100853.7,260.80
58004,100917.1,260.21
58304,100999.7,260.38
58905,101196.2,261.92
59505,101357.4,262.07
59805,101454.9,262.77
60405,101553.8,260.64
61006,101716.0,260.84
61606,101876.5,260.98
61906,101994.7,262.35
62506,102143.9,262.07
63107,102240.5,260.06
63707,102401.4,260.22
64307,102583.8,261.05
64908,102747.2,261.26
65508,102909.4,261.42
67309,103402.6,262.07
67909,103562.6,262.15
68209,103630.0,261.83
68509,103725.4,262.30
69410,103884.1,260.18
69710,103982.5,260.73
70310,104142.8,260.83
70911,104304.6,260.97
71511,104458.9,260.91
72111,104618.3,260.98
72411,104716.3,261.48
72711,104799.4,261.59
73312,104957.1,261.61
73912,105118.1,261.71
74512,105278.0,261.77
75113,105440.4,261.90
77214,105933.6,260.61
77514,106032.4,261.07
79315,106525.4,261.56
79615,106567.6,260.79
79915,106624.7,260.34
80215,106722.3,260.75
80515,106821.4,261.19
81116,106984.5,261.32
81416,107064.0,261.34
81716,107161.0,261.72
82316,107259.2,260.53
82917,107420.9,260.63
83217,107518.9,261.03
83517,107617.2,261.42
85618,108151.4,261.13
85918,108249.3,261.50
86519,108346.0,260.38
86819,108443.5,260.74
87419,108629.5,261.28
88019,108728.4,260.23
88319,108827.0,260.60
90120,109319.6,261.00
90421,109370.6,260.53
90721,109468.5,260.87
91021,109567.2,261.22
91621,109665.8,260.23
91921,109764.4,260.57
92522,109942.4,260.94
93122,110102.6,260.99
93722,110201.2,260.04
94023,110291.9,260.25
94323,110390.9,260.59
94923,110554.9,260.71
95523,110716.0,260.78
95824,110803.3,260.93
96424,110967.3,261.04
97024,111065.9,260.14
97324,111164.6,260.46
97625,111263.3,260.77
98225,111362.0,259.89
98525,111504.5,260.87
99125,111664.5,260.92
100026,111824.1,259.80
100326,111866.4,259.27
100926,111965.0,258.43
101227,112005.8,257.90
101827,112104.5,257.08
102427,112229.4,256.66
103028,112389.6,256.75
104828,112883.0,257.18
105429,112980.8,256.40
106029,113143.5,256.52
106629,113302.4,256.59
108430,113795.6,257.00
108731,113883.7,257.14
109331,114003.1,256.68
109931,114164.7,256.78
110231,114261.3,257.03
110532,114321.7,256.82
111132,114500.9,257.14
111732,114626.6,256.77
112333,114786.4,256.85
112633,114860.6,256.81
113233,115022.8,256.91
113833,115179.9,256.94
114434,115335.0,256.95
114734,115410.4,256.93
116535,115903.8,257.30
117435,116065.5,256.47
117736,116173.5,256.84
118036,116272.4,257.09
118636,116431.0,257.14
119236,116529.9,256.50
119837,116739.0,257.14
120437,116900.1,257.21
121037,117062.1,257.30
121638,117199.1,257.10
121938,117290.3,257.26
122538,117450.4,257.32
123139,117546.8,256.68
123439,117663.3,257.11
124039,117760.3,256.49
124639,117959.4,256.98
125240,118119.5,257.04
125840,118274.1,257.04
126140,118372.3,257.27
127041,118534.8,256.53
127341,118632.6,256.75
127941,118810.1,257.00
128241,118909.1,257.23
128842,119071.5,257.31
129142,119103.7,256.84
129742,119250.7,256.77
130043,119335.8,256.85
130343,119426.1,256.99
130643,119523.7,257.20
130943,119554.2,256.72
131243,119610.0,256.50
131844,119787.7,256.74
132144,119887.2,256.97
132744,120049.9,257.05
133345,120193.4,256.95
135446,120686.3,256.49
135746,120783.7,256.69
136346,120925.9,256.57
136947,121077.7,256.55
137247,121180.0,256.79
137847,121329.9,256.75
139648,121823.0,257.04
140549,121970.3,256.26
141149,122129.5,256.31
141750,122227.7,255.80
142050,122246.6,255.27

At a guess, that seems to be about 258 kpts/s; if correct, it is also 15% slower than the previous client.

[TA]Assimilator1
2007-06-15 19:21:56
I've upgraded my XPM 2500 @2.5GHz to a C2D (E6420) setup now
I haven't found its optimal speed yet, but I have benchmarked DPAD at a couple of speeds.

Benchmarking one of the 2 clients (1 thread set on each) I get ~220 kpts/s @2.5GHz; at 2.8GHz I get ~246 kpts/s.
The XPM @2.5GHz did 234 kpts/s, & as seen above my Sempron @2.5GHz does ~258 kpts/s.
I knew that C2Ds didn't do great at DPAD compared to Ath64s, but really it should be faster than Athlon XPs!  (1 client vs 1 client.)  Seems to me like DPAD needs better 'optimising' for C2Ds, seeing as C2Ds are on average 20% faster clock-for-clock than Ath X2s on most apps.

Stephen, if you want the scores for the graph LMK.  I ran the benchmark for about 18hrs with the C2D @2.5GHz, but at the 2.8GHz speed I only ran it for 3-4hrs I think, so that's probably no use to you.
Atm my CPU is clocked at 2.8GHz, but sometime soon I will be upping it further, to at least 3.1GHz, maybe more; anyway, just to let you know in case you only wanted the final scores.
Stephen Brooks
2007-06-20 16:26:52
LMK?  I've actually used your 2.8GHz result in here, assuming the total throughput is 2*246 = 492 kpts/sec.

Stephen Brooks
2007-06-20 16:31:58
I can't label "quad" and "dual" in neat sections any more because TurtleBlue's budget dual-core AM2 system pokes below the best single core (the insanely overclocked 3000+). 
[TA]Assimilator1
2007-06-21 07:19:57
LMK=Let Me Know

Forgot to mention: at 2.8GHz the FSB is 350MHz.  Can't remember what the RAM was at, somewhere between 380-400MHz (I'll check later on).
[TA]Assimilator1
2007-06-21 08:14:59
RAM was at 400 MHz 4-4-4-15

Good to see my 'lowly' Semp 3100 as the 2nd fastest single core CPU
Xanathorn
2007-06-21 15:07:15
I was able to tickle my RAM a little bit more to get the client output to 2*263 kpts/sec (526 kpts/sec).  May not be much but every little bit counts .

http://xanathorn.demon.nl/muon/benchcsv2.txt
Stephen Brooks
2007-06-21 16:13:02
I guess you probably need to bench both clients over the same period to make that quite correct, though!
Xanathorn
2007-06-21 16:22:49
Oops, forgot to paste the other core too; it was over the same period:
http://xanathorn.demon.nl/muon/benchcsv2_2.txt
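Adding the two per-core logs over their common window, as Stephen suggests, could be sketched like this; it assumes both clients' uptime clocks are comparable, which is only roughly true for two separately started benchmarks (the helper names are mine):

```python
def window_kpts(samples, t_start, t_end):
    """kpts/sec computed from the (uptime, mpts) samples that fall
    inside [t_start, t_end]; 0.0 if fewer than two samples fit."""
    inside = [(t, m) for t, m in samples if t_start <= t <= t_end]
    if len(inside) < 2:
        return 0.0
    (t0, m0), (t1, m1) = inside[0], inside[-1]
    return (m1 - m0) * 1000.0 / (t1 - t0)

def combined_kpts(core1, core2):
    """Total throughput of two clients, measured only over the
    time window both logs cover."""
    lo = max(core1[0][0], core2[0][0])
    hi = min(core1[-1][0], core2[-1][0])
    return window_kpts(core1, lo, hi) + window_kpts(core2, lo, hi)
```

Clipping both logs to the overlap is what makes the sum fair: otherwise one client's warm-up or extra running time would skew the total.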
K`Tetch
2007-06-24 17:52:05
Just a suggestion, Stephen, but wouldn't a full graph on a separate page all to itself be better now?  You could just post a message saying 'updated' here.
[TP]Skatoony
2007-06-25 01:54:50
If people wish to run multiple clients for max work output, here is how I've set my X2 up:

C:\Muon1
-Muon1_1
-Muon1_2

...and then simply extracted a new client into each (or if you have one that's currently in use, just pop it into one directory and start a fresh one for the other cores/CPUs).

If you want the clients to start up from a single icon, here's a batch script that will do the trick (however, you'll need to place a config.txt file in the directory containing the batch file, e.g. C:\Muon1, for the settings to take effect - REMEMBER TO SET THE CONFIG TO 1 PROCESS!).

Here is the batch file I made, which goes into D:\Software\Muon1 on my X2:

start Muon1_1\muon1 -c
start Muon1_2\muon1 -c

...and that should start up both clients without a hitch.

Add more directories/batch file lines for more clients, although you'll have more command prompts popping up =P

You'll have to fix the P4 EE I reported before, Stephen - it isn't a Northwood, it's a Gallatin (took ages for CPU-Z to report it correctly).  I'm also currently testing whether running 2 clients on HT is better than one client in multi-threaded mode.  Will have results sometime tomorrow.
[TP]Skatoony
2007-06-25 01:57:19
Urgh, that crapped out badly.  Should be:

start Muon1_1(backwards slash)muon1 -c
start Muon1_2(backwards slash)muon1 -c
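For comparison, the same multi-client startup could be scripted in Python rather than a batch file. A sketch only: the directory layout follows [TP]Skatoony's (Muon1_1, Muon1_2, ...), and the executable name and -c flag are taken from the batch lines above:

```python
import subprocess
from pathlib import Path

def client_commands(base, n_clients, exe="muon1", args=("-c",)):
    """Build one command per client directory (Muon1_1, Muon1_2, ...),
    each to be run with its own directory as the working directory so
    every client keeps its own config.txt and results files."""
    cmds = []
    for i in range(1, n_clients + 1):
        d = Path(base) / f"Muon1_{i}"
        cmds.append((d, [str(d / exe), *args]))
    return cmds

def launch(base, n_clients):
    """Start all clients, like the batch file's 'start' lines."""
    return [subprocess.Popen(cmd, cwd=str(d))
            for d, cmd in client_commands(base, n_clients)]
```

Setting the working directory per client is the Python equivalent of putting each `start` line's client in its own folder.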
Stephen Brooks
2007-06-25 02:48:22
Eww, something's wrong with my post interpreter
[DPC] Eclipse~Lord Alderaan
2007-06-25 14:30:13
Could you add the following under the nick [DPC] Eclipse~Maversun next time, Stephen?

2x Intel® Xeon® 5120 dual-core processor, 1.86 GHz (1066 FSB, 4MB cache)
4x 512MB (PC2 5300 buffered DDR2) (don't have latencies etc. at hand)

Core1:

Uptime (secs),Mpts in file,Estimate kpts/sec
20732,127686.1,0.00
21332,127798.5,187.08
21933,127880.7,161.95
22534,128019.3,184.86
23736,128207.8,173.66
29444,129184.3,171.97
32748,129758.8,172.49
33349,129823.2,169.38
36653,130398.8,170.38
37254,130502.0,170.43
38456,130686.1,169.26
41760,131261.7,170.04
42661,131415.5,170.06
43262,131527.8,170.51
46867,132104.2,169.05
48069,132296.8,168.66
51373,132871.5,169.23
51974,132984.8,169.60
52274,133028.1,169.36
52575,133057.0,168.67
53476,133236.6,169.51
57081,133813.9,168.58
57682,133928.9,168.95
57982,133964.7,168.55
58583,134079.8,168.92
61887,134654.2,169.31
63089,134843.6,168.98
63390,134914.6,169.45
63990,135025.7,169.67
67595,135601.8,168.91
68496,135786.7,169.59
71801,136361.5,169.88
73003,136549.9,169.57
78710,137523.1,169.67
79011,137552.4,169.29
82316,138129.4,169.58
82917,138221.3,169.42


Core2:

Uptime (secs),Mpts in file,Estimate kpts/sec
24044,128146.0,0.00
25246,128334.9,157.23
30953,129310.8,168.61
32154,129501.0,167.08
33355,129712.8,168.27
34557,129902.4,167.07
35158,130008.7,167.61
38762,130576.5,165.14
39062,130620.7,164.78
39963,130809.6,167.32
41165,130998.5,166.61
42366,131210.5,167.26
42967,131321.8,167.83
43568,131405.1,166.93
49275,132381.1,167.86
49575,132421.8,167.48
55582,133397.7,166.52
56484,133587.8,167.75
57385,133700.2,166.59
57685,133745.4,166.45
60989,134312.4,166.91
61590,134421.1,167.13
67297,135397.0,167.64
67898,135492.5,167.52
68498,135575.9,167.14
69700,135767.0,166.92
70601,135956.6,167.77
71802,136145.8,167.51
75106,136709.2,167.70
81114,137683.2,167.11
82016,137869.9,167.74
83217,138059.5,167.53
84119,138222.0,167.72
87724,138788.2,167.12


Core3:

Uptime (secs),Mpts in file,Estimate kpts/sec
21051,127191.0,0.00
22253,127402.6,176.10
22854,127478.7,159.62
26158,128043.1,166.86
26458,128061.3,160.95
29763,128628.3,164.98
30364,128743.8,166.74
31565,128952.1,167.50
31866,129002.3,167.49
32466,129066.8,164.32
33067,129182.3,165.72
36371,129773.1,168.54
36672,129791.1,166.45
36972,129843.5,166.60
38174,130039.8,166.38
39375,130242.6,166.53
39676,130285.8,166.17
40277,130398.6,166.84
43581,130965.6,167.54
43881,130982.7,166.08
44482,131079.1,165.94
45383,131268.0,167.56
51391,132250.8,166.77
51692,132313.4,167.18
52292,132425.7,167.56
52893,132489.5,166.40
53494,132601.1,166.76
54695,132808.1,166.96
54996,132857.7,166.94
56198,133044.4,166.54
56498,133081.9,166.19
56799,133173.8,167.36
58000,133362.8,167.03
61305,133930.0,167.41
67013,134906.0,167.86
67313,134922.3,167.12
67914,135034.0,167.36
69115,135225.0,167.15
70017,135414.3,167.94
70317,135427.9,167.19
70618,135470.9,167.05
71519,135657.8,167.77
72720,135847.6,167.54
73922,136037.1,167.32
74823,136227.2,168.05
78128,136794.3,168.25
79029,136907.3,167.59
82334,137474.4,167.80
82634,137534.2,167.96
82934,137575.4,167.81
84136,137767.2,167.65
85338,137995.3,168.06
85638,138035.0,167.90
86840,138224.4,167.71


Core4:

Uptime (secs),Mpts in file,Estimate kpts/sec
23465,128435.8,0.00
24065,128548.1,186.95
27370,129113.4,173.53
27970,129186.3,166.57
28571,129313.7,171.92
29772,129482.0,165.86
35479,130454.7,168.04
41186,131430.6,169.00
41486,131489.0,169.42
45090,132075.3,168.30
45991,132186.7,166.51
46892,132376.5,168.21
47794,132491.4,166.70
48695,132642.8,166.75
49596,132832.7,168.26
50497,132970.5,167.75
51698,133158.9,167.29
51999,133247.1,168.62
52900,133349.9,166.95
53801,133535.2,168.10
57405,134111.3,167.22
58607,134301.4,166.91
64314,135276.6,167.47
64915,135389.4,167.76
65816,135577.4,168.63
71823,136554.9,167.89
72724,136744.7,168.68
73926,136934.9,168.43
77230,137501.9,168.63
80534,138068.7,168.79
80834,138096.9,168.40
81435,138198.2,168.40
82636,138384.5,168.13
82937,138444.5,168.29
86241,139009.8,168.44
86842,139122.4,168.62
87142,139139.3,168.09
tomaz
2007-06-29 07:40:29
Hello.
Results of Muonbench 1800 for an Athlon64 X2 5200+ (2.61 GHz) on WXP Pro, 2GB DDR2 800 RAM.

1 instance 2 threads
Uptime (secs),Mpts in file,Estimate kpts/sec
102654,10063.9,0.00
104454,10938.6,485.92
106255,11511.1,401.98
108055,12702.7,488.64
109855,13394.6,462.57
111655,14302.0,470.88
113455,15746.7,526.16
115255,16191.3,486.28
117055,17216.5,496.68
118855,18667.5,531.06
120655,19141.5,504.29
122455,20026.1,503.12
124255,21095.5,510.70
126056,22412.6,527.70
127856,23451.2,531.22
129656,23934.0,513.68
131456,25253.7,527.40
133256,25948.3,519.07
135056,27224.7,529.63
136856,28096.5,527.24
138656,29207.7,531.74
140456,30080.9,529.52
142257,30967.6,527.84
144057,31990.3,529.59
145857,32799.6,526.26
147657,33974.0,531.31
149457,34441.9,520.87
151257,35470.0,522.73
153057,36779.2,530.04
154857,37611.9,527.71
156657,38256.4,522.05
158458,39756.3,532.09
160258,40478.2,528.00
162058,41655.7,531.82
163858,42084.3,523.18
165658,43150.6,525.16
167458,44501.4,531.41
169258,45166.5,527.04
171058,45919.7,524.18
172858,47194.1,528.89
174659,48165.3,529.16
176459,49321.8,531.92
178259,49916.6,527.12
180059,51189.4,531.31
181859,52175.6,531.68
183659,53163.6,532.06
185459,53743.0,527.49
187259,54716.9,527.78
189060,55502.6,525.88
190860,56988.3,531.99
192660,57982.0,532.39
194460,58663.5,529.38
196260,59504.5,528.18
198060,60379.6,527.39
199860,61696.4,531.17
201660,62274.6,527.35
203460,63657.5,531.65

Average of "flat part" is 529.

Regards.
tomaz
2007-07-13 20:34:54
Results of Muonbench 1800 for Turion64 ML40, 2.2 GHz on WXP pro

Uptime (secs),Mpts in file,Estimate kpts/sec
4605,324.1,0.00
6405,1018.8,385.93
8205,1134.7,225.16
10005,1803.7,273.99
13605,2772.9,272.08
15405,3002.2,247.96
17206,3606.0,260.45
20806,4416.4,252.59
22606,4745.1,245.59
24407,5324.2,252.51
26207,5692.0,248.49
28007,6142.2,248.62
29807,6332.3,238.40
31607,6943.6,245.15
33407,7288.3,241.80
35207,7867.8,246.51
37007,8140.8,241.24
38807,8719.9,245.48
40607,8835.4,236.41
42407,9413.4,240.44
46008,10070.5,235.41
47808,10652.0,239.06
51408,11373.6,236.09
53208,11951.7,239.24
55008,12214.2,235.90
56808,12793.2,238.86
58608,13169.3,237.86
60408,13624.2,238.34
64008,14319.7,235.60
65808,14898.5,238.13
67608,15014.3,233.16
69408,15596.8,235.68
71208,16222.3,238.70
73009,16600.7,237.95
74809,16924.8,236.46
76609,17503.7,238.59
78409,17845.6,237.41
80209,18005.4,233.87
82009,18698.7,237.38
83809,18814.6,233.45
85609,19411.5,235.63
87409,20006.7,237.70
89209,20457.4,237.97
91010,20564.3,234.25
92810,21288.8,237.68
94610,21498.9,235.26
96410,22078.9,236.97
98210,22541.5,237.35
101810,23208.4,235.42
103610,23787.4,236.99
107210,24713.3,237.70
109011,25123.2,237.53
110811,25277.0,234.95
112611,25985.8,237.60
114411,26295.6,236.52
116211,26518.3,234.70
118011,27096.4,236.07
119811,27676.8,237.42
121611,27872.8,235.45
123411,28468.7,236.89
127012,29180.9,235.75
128812,29760.1,236.99
130612,29935.6,235.00
132412,30517.4,236.24
136012,31331.2,235.96
137812,31910.0,237.12
139612,31953.6,234.28
141412,32638.1,236.20
145013,33227.8,234.34
146813,34027.3,237.00
148613,34395.4,236.59
150413,34533.7,234.62
152213,35253.9,236.64
154013,35508.5,235.49
155813,36087.4,236.52
157613,36315.3,235.22
159414,36910.5,236.33
163014,37501.9,234.70

Average of "flat part" is 236.
[XS]riptide
2007-07-15 12:13:04
Time for a new Graph

Uptime (secs),Mpts in file,Estimate kpts/sec
20832,489596.2,0.00
21133,489708.2,371.86
21434,489737.3,234.25
21735,489897.2,333.14
22036,489974.2,313.78
22340,490079.7,320.47
22642,490158.7,310.79
22943,490268.0,318.23
23244,490375.1,322.90
23545,490429.0,306.92
23848,490562.5,320.34
24450,490749.9,318.81
24752,490833.3,315.59
25053,490861.9,299.85
25354,491019.8,314.79
25655,491107.4,313.30
27466,491698.6,316.90
27767,491762.9,312.41
28068,491872.6,314.56
28370,492009.9,320.21
28671,492036.3,311.28
28972,492185.5,318.09
29273,492293.5,319.53
29574,492401.4,320.87
29875,492459.5,316.61
30177,492594.4,320.84
30478,492703.6,322.14
30779,492768.3,318.89
31080,492901.3,322.50
31381,492929.8,315.99
31682,493059.6,319.18
31984,493191.0,322.35
32285,493264.8,320.31
32586,493375.6,321.53
32887,493472.8,321.56
33188,493557.5,320.58
33489,493655.7,320.71
33791,493764.1,321.62
34092,493790.5,316.31
34393,493897.9,317.20
34694,494029.3,319.79
34995,494099.8,317.97
35296,494208.4,318.86
35598,494339.8,321.25
35899,494409.1,319.43
36200,494544.7,321.99
36501,494631.4,321.34
36802,494681.2,318.39
37104,494836.3,322.03
37405,494882.6,318.97
37706,495014.0,321.07
38007,495062.7,318.27
38308,495167.2,318.77
38610,495315.9,321.73
38911,495378.3,319.82
39212,495485.6,320.42
39513,495594.4,321.08
39815,495695.9,321.32
40116,495778.0,320.56
40417,495885.9,321.14
40718,495992.4,321.63
41020,496092.2,321.78
41324,496147.3,319.69
41625,496267.9,320.86
41926,496356.1,320.46
42227,496451.9,320.43
42528,496554.1,320.69
42830,496582.3,317.58
43131,496713.7,319.18
43432,496851.3,321.02
43733,496879.6,318.03
44034,497038.6,320.75

Average for all (excluding the 0 value at the first reading) = 318.4677

This is for 1 instance on a QX6700 B3 @ 3.6GHz, 400FSB x 9 multiplier, RAM = 400MHz 4-5-5-15

I was running 4 instances, each tied to one core only, so we can say 318.4677 x 4 = 1273.87 kpts/sec

[Screenshot hosted by ImageShack.us, shot at 2007-07-15]
Stephen Brooks
2007-07-16 11:07:25
3.6GHz quad-core!  Not bad, I hear they only got up to 4.4GHz on liquid nitrogen with the QX6700.
Stephen Brooks
2007-07-16 11:08:40
I shouldn't make a new graph right this instant because we're about to go over a multiple of 50 posts and start a new page.  I'll post one when we're back up the top again.
[TA]Assimilator1
2007-07-17 19:10:48
[XS]riptide
Awesome rig!  & you've breached the graph's max limit too

I'm still tweaking my main rig atm.  With my E6420 @3.2GHz, 400MHz FSB, RAM 400MHz 4-4-4-15, I'm getting 291 kpts/s (1 client, 1 core); at 3.25GHz, 406MHz FSB, 380MHz RAM 4-4-4-15, I get about 295 kpts/s.
My aim is to get my rig stable at 406/406 MHz & then hopefully break 300 kpts/s per core, thus beating [XS]riptide's E6600 score! 
That's if I can get my damn rig stable at 406/406; so far I haven't been able to, but I have a couple of tweaks left to try before I call it quits.
Stephen - don't worry about adding those scores to a new graph, as I've only tested them for a few hours or so, & due to further tweaking they aren't final scores either.