Message boards : Number crunching : Overclocking Failed

NOJAVA

Joined: 5 May 06
Posts: 11
Credit: 1,191
RAC: 0
Message 13512 - Posted: 5 May 2006, 21:03:48 UTC

I joined LHC@home because a friend of mine, Meskine Mohamed, told me it is a good project. I went to the LHC site and read everything. It is amazing that such small particles can produce such enormous energy!
I also read the other threads and saw one about "overclocking failed".
My point is that LHC@home asks something of us: if your WU results are always invalid, stop overclocking rather than blame LHC@home. I found a very small program called "FSB" that can overclock from a Windows interface without any special knowledge.

But overclocking is very dangerous (I know what I am talking about), because you overclock everything: CPU, chipset, memory. Temperature increases and performance decreases. You also get constant errors in memory, so WU results will automatically be invalid. LHC@home describes this on its site.

I have found another problem due to overclocking. I bought a 512 MB memory stick (I won't say which brand), rated as PC3200 400 MHz memory. When I reboot, my BIOS says "overclocking failed". That means it is mislabelled memory: it really runs at 333 MHz; it came from China. The Internet prevents overclocking of RAM made in China. So you must enter your BIOS at boot and decrease the speed of your memory, perhaps from 400 MHz to 333, 266, or 200.
If you have never overclocked your processor and still always get invalid results in your WUs, it most likely comes from your RAM and not your CPU.
Advanced users also switch the BIOS "ECC DRAM configuration" from enabled to disabled to increase speed; don't do that, and see the results.
Yours, Joseph
bye
ID: 13512
Osku87

Joined: 2 Nov 05
Posts: 21
Credit: 105,075
RAC: 0
Message 13515 - Posted: 5 May 2006, 23:04:12 UTC

I can't quite see your point here, but...

But overclocking is very dangerous (I know what I am talking about), because you overclock everything: CPU, chipset, memory. Temperature increases and performance decreases.

Overclocking is safe when you know what you are doing. If you overclock your CPU by changing the FSB, you may or may not (depending on your hardware) also overclock your memory and chipset. You are right that temperature increases, but if you do it right, performance won't decrease; it will increase.

You also get constant errors in memory, so WU results will automatically be invalid. LHC@home describes this on its site.

If you point to something, you could also link to it. I don't think sensible overclocking returns false results.

The Internet prevents overclocking of RAM

I would like to know how the Internet prevents anyone from overclocking.

but it is rated as PC3200 400 MHz memory.
When I reboot, my BIOS says "overclocking failed".

Usually this message comes from the processor, and it can occasionally appear even at the standard CPU clock. You may also have bought bad memory, if the problem appeared after installing the new RAM.
ID: 13515
Travis DJ

Joined: 29 Sep 04
Posts: 196
Credit: 207,040
RAC: 0
Message 13521 - Posted: 6 May 2006, 16:29:00 UTC - in response to Message 13512.  
Last modified: 6 May 2006, 16:30:04 UTC

I got a few things from your post:

1) Overclocking and LHC don't work well with each other unless you do extensive testing prior to actually running LHC@Home on an overclocked system. The floating-point math that Sixtrack does is sensitive to every digit, out to the end of the 80-bit register. One error at any time during a run will kill that WU.

2) I'm guessing you have an ASUS motherboard and an AMD processor. Most boards from ASUS, MSI, Biostar (and others) sport a routine which automatically selects the best memory setting given various settings in the BIOS. It is possible for a system to set a DDR400 chip to DDR333 if the timings are better at the "slower" speed (i.e. 2.5-3-3-6 1T @ DDR333 vs. 3-4-4-8 2T @ DDR400; DDR333 in this case is faster). It's for this reason that DDR400 is "faster" than some DDR2-800 chips. Frequency means nothing when the chips have to waste more cycles waiting before they can do something else. So the memory may not be "false" as you say; rather, it works best at a "slower" setting. What the BIOS message probably means is that after you've been playing with "FSB" in Windows and errors resulted, on reboot the BIOS knows better: it changes its settings back to a more compatible default and gives you an error message.

3) If memory seems to be your problem when you overclock, why not test it thoroughly and make sure it works to your expectations? Visit http://www.memtest.org/ and download the .iso image of Memtest86+. If you have a fully supported chipset you will be able to test your RAM and alter the memory timings while it's testing, so you can quickly find the limits of your memory & motherboard. Again, keep in mind that a slower frequency with tighter timings will get you better results than a high frequency with loose timings.
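To make the timing comparison from point 2 concrete, here is a small illustrative Python sketch (the cycle counts are the ones from the example above; the clocks are nominal DDR333/DDR400 values, and the arithmetic is back-of-the-envelope, not a benchmark):

```python
# DDR transfers twice per clock, so a "DDR333" module runs a ~166.5 MHz
# clock and a "DDR400" module a 200 MHz clock. A timing expressed in
# cycles therefore converts to a different number of nanoseconds on each.

def cycles_to_ns(ddr_rating, cycles):
    """Convert a timing in clock cycles to nanoseconds for a DDR speed grade."""
    clock_mhz = ddr_rating / 2        # DDR333 -> 166.5 MHz, DDR400 -> 200 MHz
    cycle_ns = 1000.0 / clock_mhz     # length of one clock cycle in ns
    return cycles * cycle_ns

# CAS latency: 2.5 cycles at DDR333 vs 3 cycles at DDR400 -- a dead heat.
cas_333 = cycles_to_ns(333, 2.5)      # ~15.0 ns
cas_400 = cycles_to_ns(400, 3)        # 15.0 ns

# tRCD: 3 cycles at DDR333 vs 4 cycles at DDR400 -- DDR333 actually wins.
trcd_333 = cycles_to_ns(333, 3)       # ~18.0 ns
trcd_400 = cycles_to_ns(400, 4)       # 20.0 ns

print(f"CAS:  {cas_333:.1f} ns @ DDR333 vs {cas_400:.1f} ns @ DDR400")
print(f"tRCD: {trcd_333:.1f} ns @ DDR333 vs {trcd_400:.1f} ns @ DDR400")
```

The point is that the higher-frequency setting buys nothing in real latency here, which is why a BIOS may legitimately pick the "slower" speed grade.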

..just my two cents

ID: 13521
Nuadormrac

Joined: 26 Sep 05
Posts: 85
Credit: 421,130
RAC: 0
Message 13527 - Posted: 8 May 2006, 8:32:02 UTC

When the BIOS downclocks the FSB below the rated setting after someone has pushed an overclock beyond what the hardware was capable of, it doesn't necessarily mean the memory should be run at its defaults. Many times the BIOS sets it to some fail-safe value following a crash from an excessive OC, not an optimum. The fail-safe represents a "guaranteed to boot" setting. From there the user has to go back in; actually, they find themselves in the BIOS already, with the error message and an opportunity to re-set the CMOS...

Travis is correct, however, that if enough wait states get introduced, actual performance might not improve...

Also, an OC doesn't guarantee that there will be errors, but the possibility is greater; it isn't an all-the-time thing. What's more, at least in the past, Intel for one would sometimes underclock a processor before shipping. By this I mean that if Intel had too many Pentium 133s and didn't want to lower the price (supply and demand), they might have re-packaged some Pentium 133s as Pentium 100s (even though internal testing rated them for the higher clock). Doing this, they could in effect reduce the supply of Pentium 133s and help keep prices higher...

Regardless, and hopefully to de-mystify this somewhat, what really goes on is this: the timing crystal does not give a perfectly steady clock rate; from one clock pulse to the next it can vary slightly. In fact, with an A64 (which already has Cool'n'Quiet to let the clock vary with usage), this can be seen even when CnQ is disabled. Looking at my own A64, it was not uncommon to see an occasional clock reading either higher or lower than the rated clock by exactly the multiplier (i.e. 1 MHz faster or slower on the HTT).

Given that in the real world (vs. on paper) things can vary slightly, manufacturers routinely put a little timing margin into their products. This assures that even when one runs into the occasional faster-than-typical clock pulse, everything still runs as it should.

Because of this, there is also room to overclock, though one's results can vary (the luck of the draw: some people get a better-overclocking CPU, memory, etc., and some are less lucky). Of course there are no guarantees here... As long as one maintains a degree of timing margin while running under the most stressful conditions (which is why many test with Prime95, for instance), one should be OK. And by the way, CPDN can be much more stressful on a CPU, in many people's experience, than LHC or many other projects. In fact, CPDN is where many users have been most prone to run into problems...

If, however, one has run out of timing margin (or never had any) and gets that occasional faster clock pulse, then a timing problem can result, and yes, an error. It's not that overclocking guarantees an error, but a timing violation (running faster than the hardware is capable of, even for a random instant) can cause one. That's the reason it is also good to back off once one finds the hardware's limit, to make sure some margin is left in the system...

As to temperatures, this is true, but there are ways to deal with them too. People who are serious about overclocking won't necessarily use one of those thermal pads, but might prefer something like Arctic Silver 5. The heat sinks, and the attention one gives to the system's cooling, can also be greater...
ID: 13527
Travis DJ

Joined: 29 Sep 04
Posts: 196
Credit: 207,040
RAC: 0
Message 13532 - Posted: 8 May 2006, 22:26:00 UTC - in response to Message 13527.  

Son Goku--

To elaborate more about CPDN: It's actually more memory intensive, but it really depends on the size of the CPU L1/L2 cache. The hadsm/cm/clm programs do extensive table lookups - Windows' performance monitor will reflect a large difference between page lookups & faults between sixtrack and hadxxx. HADSM seems to perform better on my Pentium-M 1.6 (2MB L2 Cache, DDR333) than the AthlonXP 3200+ (2.2GHz 512K L2 Cache, DDR400) despite the higher frequency of the AthlonXP. That's what I've noticed anyhow. :)
ID: 13532
Travis DJ

Joined: 29 Sep 04
Posts: 196
Credit: 207,040
RAC: 0
Message 13533 - Posted: 8 May 2006, 22:27:13 UTC - in response to Message 13532.  
Last modified: 8 May 2006, 22:40:52 UTC

Son Goku--

To elaborate more about CPDN: It's actually more memory intensive, but it really depends on the size of the CPU L1/L2 cache. The hadsm/cm/clm programs do extensive table lookups - Windows' performance monitor will reflect a large difference between the total number of page lookups & faults between sixtrack and hadxxx. [edit, was thinking about 2 things at once and made no sense out of something simple]
On my AthlonXP 3200+ there was around (this is off the top of my head, it's been a few months since I ran it last) a 1:6 ratio of lookups between sixtrack and hadxxx.

ID: 13533
Nuadormrac

Joined: 26 Sep 05
Posts: 85
Credit: 421,130
RAC: 0
Message 13572 - Posted: 11 May 2006, 21:48:43 UTC
Last modified: 11 May 2006, 21:54:42 UTC

Actually, by timing margin I didn't mean the number of memory lookups, as in how many; I meant a timing problem on the hardware side of things. That is, a lack of timing margin combined with a faster clock pulse: say one is running the memory bus at 210 MHz, and for one clock only it "runs at" 211 MHz, then drops back down to 210 MHz. Or to put it more directly: if the memory needs 15 ns before it is ready to be accessed, because one is running (whether overclocked or not) right at the edge, but a particular clock cycle accesses it after only 14 ns, then a problem can result. However, if the memory timing settings in the BIOS effectively give a 15 ns access window (typically, or on average), and the memory is actually capable of being accessed in 12 ns (as well as 14 ns, 15 ns, etc.), then the occasional 14 ns "faster clock pulse" won't affect stability at all, since there is still enough timing margin to accommodate it... Hopefully I didn't make things as clear as mud with the above :)
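The margin idea above can be sketched in a few lines of Python (the 15 ns / 14 ns / 12 ns figures are the ones from this post; `is_stable` is a hypothetical helper for illustration, not how any BIOS actually works):

```python
# Timing-margin sketch: an early clock pulse shrinks the effective access
# window. The system stays stable only while the memory's real capability
# is at least as fast as the shortest window the jittery clock can produce.

def is_stable(memory_capable_ns, nominal_window_ns, jitter_ns):
    """True if the memory can still respond within the worst-case window."""
    worst_case_window = nominal_window_ns - jitter_ns
    return memory_capable_ns <= worst_case_window

# Memory running right at the edge: it needs the full 15 ns, but an early
# pulse leaves only a 14 ns window -> timing violation, possible bad result.
print(is_stable(15.0, 15.0, 1.0))   # False

# Memory actually capable of 12 ns: the same 14 ns worst case is fine,
# because 1 ns of margin remains even on the fastest pulse.
print(is_stable(12.0, 15.0, 1.0))   # True
```

This is why backing an overclock off slightly from the hardware's observed limit matters: it restores the margin that absorbs the occasional fast pulse.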

I do suppose those extra memory ops in CPDN could be a reason why it is almost known as one of the biggest "stress test projects" for a computer's stability. It does almost seem that if something is stable on CPDN, one probably won't have a problem elsewhere...
ID: 13572
Travis DJ

Joined: 29 Sep 04
Posts: 196
Credit: 207,040
RAC: 0
Message 13574 - Posted: 12 May 2006, 1:26:34 UTC - in response to Message 13572.  

Ahhh.. I read you now.

You're sure right about CPDN being a "Hell's Kitchen" of sorts - I feel sorry for all the North Bridges on Intel-based dual core CPU systems. Bottleneck city.. :(
ID: 13574

©2024 CERN