10-11-2010, 01:00 PM | #11 |
Maniac Drummer
Join Date: Feb 2008
Location: Florida
Posts: 3,017
How Many Cores Is Too Many?
Source info = http://www.extremetech.com/article2/...2370140,00.asp

Ever since the first dual- and quad-core CPUs were released several years ago, the procedure for buying processors has involved counting the number of cores in the CPU, counting the number of dollars in your wallet, and then doing your best to match up the values. After all, chips that haven't been overclocked haven't gotten much faster than 3.3 GHz or 3.4 GHz, so despite a bit of variance here and there, you've generally been able to feel secure that the more cores your CPU contained, the faster it would perform overall.

This has been acceptable thinking up until now. After all, it's only this year, with the release of Intel's Core i7-980X and Core i7-970 and AMD's Phenom II X6 series (including the 1090T, the 1055T, and the 1075T), that the number of cores on consumer CPUs has outgrown what you can count on the fingers of one hand. But technology never stops advancing, and recent research suggests that the standing advice to buy the most cores you can afford may not remain practical much longer.

A group of MIT researchers is presenting a paper today at the USENIX Symposium on Operating Systems Design and Implementation in Toronto, titled "An Analysis of Linux Scalability to Many Cores," detailing how very large numbers of cores affect processing performance. The paper deals with Linux specifically, but it's a helpful reminder of the challenges facing hardware and software design in today's computing world, and of how important it is that these problems get solved soon.

The problem, according to the researchers, starts appearing in systems with dozens of cores. They built a system in which eight six-core chips effectively mimicked one 48-core chip, then ran a lengthy series of tests. The large core count made for a blazing-fast system, but it was still slower than it should have been. The reason?
This story from MITnews explains: in a multicore system, multiple cores often perform calculations that involve the same chunk of data. As long as the data is still required by some core, it shouldn't be deleted from memory. So when a core begins to work on the data, it ratchets up a counter stored at a central location, and when it finishes its task, it ratchets the counter down. The counter thus keeps a running tally of the total number of cores using the data; when the tally gets to zero, the operating system knows it can erase the data, freeing up memory for other procedures.

As the number of cores increases, however, tasks that depend on the same data get split into smaller and smaller chunks. The MIT researchers found that the separate cores were spending so much time ratcheting the counter up and down that they weren't getting nearly enough work done.

According to the paper, the problems were often due to cache behavior:

"Many scaling problems manifest themselves as delays caused by cache misses when a core uses data that other cores have written. This is the usual symptom both for lock contention and for contention on lock-free mutable data. The details depend on the hardware cache coherence protocol, but the following is typical. Each core has a data cache for its own use. When a core writes data that other cores have cached, the cache coherence protocol forces the write to wait while the protocol finds the cached copies and invalidates them. When a core reads data that another core has just written, the cache coherence protocol doesn't return the data until it finds the cache that holds the modified data, annotates that cache to indicate there is a copy of the data, and fetches the data to the reading core. These operations take about the same time as loading data from off-chip RAM (hundreds of cycles), so sharing mutable data can have a disproportionate effect on performance."
The conclusion, according to the MITnews story, is that "[s]lightly rewriting the Linux code so that each core kept a local count, which was only occasionally synchronized with those of the other cores, greatly improved the system's overall performance." But will what works for modern-day Linux (and other operating systems) also work as core counts climb even higher? Past 48 cores, according to one of the researchers, "new architectures and operating systems may become necessary." More info = http://www.conceivablytech.com/3166/...p-to-48-cores/
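The per-core-count fix quoted above can be sketched in the same style. To be clear, this is an assumption-laden illustration of the general idea (the shard layout, the 64-byte padding, and the add/total API are mine), not the actual Linux change:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

const numShards = 8 // one slot per worker/core in this sketch

// shard pads each counter out to its own 64-byte cache line
// (a typical x86 line size) so that one core's updates don't
// invalidate the line another core is using.
type shard struct {
	n int64
	_ [56]byte
}

// shardedCount illustrates "each core keeps a local count, only
// occasionally synchronized with the others."
type shardedCount struct {
	shards [numShards]shard
}

// add touches only the caller's own slot: no sharing on the fast path.
func (s *shardedCount) add(worker int, delta int64) {
	atomic.AddInt64(&s.shards[worker%numShards].n, delta)
}

// total is the occasional synchronization step: sum every slot.
func (s *shardedCount) total() int64 {
	var sum int64
	for i := range s.shards {
		sum += atomic.LoadInt64(&s.shards[i].n)
	}
	return sum
}

func main() {
	var c shardedCount
	var wg sync.WaitGroup
	for w := 0; w < numShards; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for i := 0; i < 100000; i++ {
				c.add(id, 1)
				c.add(id, -1)
			}
		}(w)
	}
	wg.Wait()
	fmt.Println("total:", c.total())
}
```

Each worker now hammers its own cache line, so updates stay local and cheap; the cross-core traffic is deferred to the rare total() call. That's the trade the researchers made: fast updates in exchange for a global count that is only computed when actually needed.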
__________________
I am a USAF Veteran and LoveUSA |
10-14-2010, 07:36 AM | #12 |
Maniac Drummer
Join Date: Feb 2008
Location: Florida
Posts: 3,017
EVGA code (AR)
Lifetime warranty if you register within 30 days of purchase. http://www.newegg.com/Product/Produc...rder=BESTMATCH
NVIDIA PhysX Technology
NVIDIA PureVideo HD Technology
NVIDIA 2-way SLI Ready
NVIDIA 3D Vision Surround Ready
NVIDIA CUDA Technology with CUDA C/C++, DirectCompute 5.0 and OpenCL Support
Microsoft Windows XP/Vista/7 Support
05-24-2012, 09:34 AM | #13 |
Maniac Drummer
Join Date: Feb 2008
Location: Florida
Posts: 3,017
For Win7 users, to help you with setup
Service configuration guide: http://www.blackviper.com/service-co...onfigurations/
Win7 compatibility test: http://www.microsoft.com/windows/com...s/default.aspx