What is PCI Express? A Layman's guide to high speed PCI-E technology
by Lee Penrod
You are encouraged to make links to this article from your website and tell your friends
The following article is based on years of experience. It is provided as a free service to our customers and visitors. However, Directron.com is not responsible for any damage as a result of following any of this advice.
Copying the contents for commercial purposes is strictly prohibited without Directron.com's written consent. However, you are welcome to distribute these computer support tips free to your friends and associates as long as it's not for commercial purposes and you acknowledge the source. You are permitted and encouraged to create links to this page from your own web site.
Introduction
So you want to know about PCI Express? PCI Express is a recent feature addition to many new motherboards. PCI Express support can have a big impact on your hardware choices both now and in the future. This article will explain the topic in plain English without boring you with useless information.
Why should I care about PCI-Express?
There are two main reasons to care about PCI-Express: 1) PCI is now an old standard dating back to the early '90s and no longer fits our needs in terms of speed and performance. 2) AGP is in a similar position to PCI, and chipset manufacturers are killing AGP motherboard support in favor of the much faster PCI Express interface. This means you are looking at a forced transition in the graphics sector, so you really don't have a lot of choice in the coming years. At this point (August, 2008) AGP cards represent less than 5% of available cards, and the latest graphics chipsets are simply not offered in that form factor.
While we've spent plenty of time and energy improving the speed of processors, memory, and other parts of the PC, we've done virtually nothing with the main connection between many devices: PCI. As such, we are stuck with a technology in our PCs and servers that still runs at the speeds and bandwidth we were comfortable with in the '90s. PCI as we know it is holding us back. It is a bottleneck, a limitation on the maximum performance of our systems.
We all want the most from our PC. To get the most out of our PC we must remove all bottlenecks (obstacles to performance). To that end we must turn to the next best alternative: PCI Express.
Easing into It: All About the Why
If you have read my guide to the Front Side Bus then you are familiar with the analogy of a PC as a city with many roads (buses) in it that move cars (data) to any number of destinations. Let's quickly revisit my explanation of PCI from that article:
PCI Bus - The PCI bus connects your expansion cards and drives to your processor and other subsystems. On most systems the bus speed of the PCI bus is 33MHz. If you go higher than that, then cards, drives, and other devices can have problems. The exception to this is found in servers. Some servers have special 64-bit (extra-wide) 66MHz PCI slots that can accept special high-speed cards. Think of this as a double-sized passing lane on a major road that allows higher-speed cars to get through.
Now in my previous article I mentioned a special type of PCI (64 bit). The reason 64 bit helps is that it improves the bandwidth of the PCI Bus. Bandwidth, normally expressed in MB per second, is basically a measure of the amount of data that can be pushed through something at one time.
If you have ever sat in your car looking at the back bumper of another car during rush hour, then you probably have a good idea of what's going on in the modern PCI bus. You've got too many cars (data) going through too narrow and too slow a road (the PCI bus) at one time.
Bandwidth
PCI Express in all its flavors (1x, 2x, 4x, 8x, 16x, and 32x) has much greater bandwidth than basic PCI.
Common Buses and their Max Bandwidth
PCI 132 MB/s
AGP 8X 2,100 MB/s
PCI Express 1x 250 [500]* MB/s
PCI Express 2x 500 [1000]* MB/s
PCI Express 4x 1000 [2000]* MB/s
PCI Express 8x 2000 [4000]* MB/s
PCI Express 16x 4000 [8000]* MB/s
PCI Express 32x 8000 [16000]* MB/s
IDE (ATA100) 100 MB/s
IDE (ATA133) 133 MB/s
SATA 150 MB/s
Gigabit Ethernet 125 MB/s
IEEE1394B [Firewire] 100 MB/s
* Note - Since PCI Express is a serial, point-to-point technology, data can be sent over the bus in two directions at once. Conventional PCI is a shared parallel bus, so only one device can transmit at a time and data flows in only one direction at a time. Each 1x lane in PCI Express can transmit in both directions simultaneously. In the table, the first number is the bandwidth in one direction and the second number is the combined bandwidth in both directions. Also note that PCI Express bandwidth is not shared the same way as in PCI, so there is less congestion on the bus.
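If it helps to see the arithmetic behind the table, here is a small Python sketch. The function name and the simple lanes x 250 model are my own illustration of the first-generation figures listed above; real-world throughput runs a bit lower due to protocol overhead.

```python
# Per-direction bandwidth of one first-generation PCI Express lane,
# per the table above (approximate).
PER_LANE_MB_S = 250

def pcie_bandwidth(lanes, both_directions=False):
    """Approximate PCI Express 1.0 link bandwidth in MB/s."""
    one_way = PER_LANE_MB_S * lanes
    return one_way * 2 if both_directions else one_way

# Reproduce the table rows: one-way [both-directions] figures.
for lanes in (1, 2, 4, 8, 16, 32):
    print(f"PCI Express {lanes}x: {pcie_bandwidth(lanes)} "
          f"[{pcie_bandwidth(lanes, both_directions=True)}] MB/s")
```

Running it prints the same numbers as the table, e.g. "PCI Express 16x: 4000 [8000] MB/s".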
Increased bandwidth translates into increased system performance. We've long known that to get the most out of your processor you need to get as much information into it as possible, as quickly as possible. Chipset designers have consistently addressed this by increasing Front Side Bus speeds. The problem is that a faster front side bus only speeds up transfers between the memory and CPU, but you've often got data coming from other sources (drives, network traffic, video, etc.) that also needs to reach the memory or CPU. PCI Express addresses this problem head on by making it much faster and easier for data to get around the system.
Physical Differences: PCI Express [ PCI-E ] vs AGP vs PCI
Currently, the most common use for PCI Express is Video. On the graphic at right you can see the physical differences between the cards.
The connector on a PCI Express video card always starts with a short section separated from the rest of the contacts by a notch; this short section corresponds to the 1x portion of the 16x slot. The characteristic notch makes it easy to tell the difference between a PCI-E (PCI Express) and an AGP video card. As you can imagine, a PCI Express video card will not fit into an AGP slot, and an AGP video card will not fit into a 16x PCI Express slot.
Another physical difference between PCI Express, AGP, and PCI cards is the distance between the card's bracket and the start of the connector. On PCI Express cards there is very little distance between the metal bracket and the start of the connector; on both PCI and AGP the distance is much longer.
PCI Express 1x / 4x cards show the same physical difference in bracket distance, and both connectors are a good deal smaller than standard PCI. At this time these cards are still quite rare, so there is little chance of confusing them with something else. The PCI Express 1x connector does, however, bear some minor resemblance to an AMR slot, so it is important not to confuse the two. No motherboard on the market today has both the older AMR slot and PCI Express.
Q&A Common Questions about PCI Express
Q: Is PCI Express Faster Than PCI?
A: PCI Express is much faster than PCI. Even a 1x card has nearly 90% more one-way bandwidth (250 MB/s vs. 132 MB/s). When you compare PCI Express video to PCI video the difference is enormous: a PCI Express 16x slot offers roughly 30 times the bandwidth of standard PCI.
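The arithmetic behind comparisons like these comes straight from the bandwidth table; here is a quick Python sketch (the function name is my own, purely for illustration):

```python
def percent_faster(new_mb_s, old_mb_s):
    """How much faster, in percent, one bus is than another."""
    return (new_mb_s / old_mb_s - 1) * 100

# PCI Express 1x vs. standard PCI (one-way figures from the table)
print(round(percent_faster(250, 132)))  # ~89% faster

# PCI Express 16x vs. standard PCI, as a bandwidth multiple
print(round(4000 / 132, 1))             # ~30.3x the bandwidth
```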
Q: Is PCI Express Video Faster than AGP Video?
A: Yes and no. A 16x PCI Express connection offers nearly twice the one-way bandwidth of AGP 8x (4,000 MB/s vs. 2,100 MB/s), but this is only the connection between the system and the video card. You use the connection the most when your video card is low on memory or when the game you are playing uses a DirectX or OpenGL feature that isn't supported in hardware.
So, what this means is that in terms of real world performance there may not be a huge difference between AGP and PCI Express if you are talking about identical chipsets. Unfortunately this is very hard to prove because graphics chipsets are designed either for PCI Express or AGP. If you have a card that is available in both forms then you have a graphics chipset that was designed for PCI Express and has a special bridge chip installed to let it communicate with the AGP bus. The short of this is: if two cards of the same chipset are available in AGP and PCI-E then the PCI-E one will always be faster. On PCI-E you don't have the overhead of the bridge chip so it's faster, and you have the better bandwidth so in intense situations such as high resolution gaming you'll come out on top every time.
The main point here is: if you have a system with AGP on it, it doesn't make sense to upgrade just to get PCI-E video right now. The fastest AGP card ever to come out is likely to be the nVidia 6800GT. If you are at a point where that is too slow, then by all means it makes sense to make a complete switch. If you're happy with your AGP graphics options, wait until you are ready to upgrade the processor or other components before making the PCI-E switch. For more information on AGP and PCI please see the general FSB guide.
Q: What is SLI?
A: SLI, or Scalable Link Interface, is a technology that lets you take two identical nVidia-based graphics cards *that support SLI* and a motherboard *that supports SLI* to achieve a very high level of video performance. SLI works by splitting the rendering of the screen between the two cards: one card renders half, the other card renders the other half. This technique is extremely effective. For instance, two 6600GT cards in SLI can do vastly better than a 6800GT or X800 card, even though the price is lower for two 6600GT cards. The downside is that SLI is still new and is limited to systems based on AMD 64 / AMD FX Socket 939 processors.
Q: Do I need a special power supply for PCI-E [PCI Express]?
A: Yes and no. Although the PCI-E spec calls for a PCI Express power connector, most PCI-E cards don't currently use it. This means you probably only need to worry about it if you are buying bleeding-edge PCI Express parts. Cards based on the ATI X600, ATI X700, ATI X300, ATI X1300, Nvidia 6600, Nvidia 7600 or Nvidia 7300 series graphics chipsets rarely use the connector. If you are in a situation where you need a PCI Express power connector but the power supply doesn't have one, you can always use a PCI Express power adapter that converts a 4-pin Molex connector to PCI-E power.
Conclusion
PCI Express is an exciting advance in the computer world. Although AGP is now dying rapidly, standard PCI will take longer to die off. Expect to see at least one or two standard PCI slots alongside PCI-E in motherboards for at least the next two years. By that time there will be PCI-E replacements for all common devices such as modems, network cards, RAID cards, I/O and more.
If you find this article useful, please create a link to it from your website or tell a friend about it. If you have any comments or suggestions about this article, please email information@directron.us
Last Updated: 8/13/2008
Understanding System Memory and CPU speeds: A layman's guide to the Front Side Bus (FSB)
by Lee Penrod
Introduction
Shopping for a new processor and motherboard can be confusing. Some of the most important terms and concepts regarding system performance are also the hardest to understand. Terms like: System Clock, Quad Pumping, Double Pumping, DDR, FSB, SDRAM, Dual Channel, and QDR make many new builders cringe. In this article I will walk you through some of these important concepts so that you can make a more informed decision when upgrading your current system or building a new one.
Part One: What's a Bus?
To get anything done with a computer you have to get the information you input to the CPU and then to any attached devices such as cards, displays, and other output devices. Inside the computer itself, this information travels in the form of signals over what is known as a bus. You can think of a bus as a road and the signals as cars. A wide road (bus) can support more cars (signals), and a smaller road (bus) supports fewer. The cars (signals) on the road (bus) have a speed limit (the bus speed). Although a speed limit can be broken (an overclocked bus), doing so can have adverse effects on the cars (signals).
Going along with this analogy: A computer is like a small city. You do not have just one road, but instead you have several different roads with different names and speeds.
There are three main buses in most computers:
1) PCI Bus - The PCI bus connects your expansion cards and drives to your processor and other subsystems. On most systems the bus speed of the PCI bus is 33MHz. If you go higher than that, then cards, drives, and other devices can have problems. The exception to this is found in servers. Some servers have special 64-bit (extra-wide) 66MHz PCI slots that can accept special high-speed cards. Think of this as a double-sized passing lane on a major road that allows higher-speed cars to get through. For information about PCI Express please see the PCI Express Guide.
2) AGP Bus - The AGP bus connects your video card directly to your memory and processor. It is very high speed compared to standard PCI, with a standard speed of 66MHz. Only one device can be hooked to the AGP bus, as it supports only one video card, so its speed holds up better than that of the PCI bus, which has many devices on it at once.
3) Front Side Bus (FSB) - The Front Side Bus is the most important bus to consider when you are talking about the performance of a computer. The FSB connects the processor (CPU) in your computer to the system memory. The faster the FSB is, the faster you can get data to your processor. The faster you get data to the processor, the faster your processor can do work on it. The speed of the front side bus depends on the processor and motherboard chipset you are using as well as the system clock. Read on for more information about the Front Side Bus later in this article.
Part Two: The System Clock
The system clock is the actual speed of your FSB without any enhancements (such as double pumping or quad pumping) applied to it. The system clock is also sometimes just called the bus speed. From the system clock, your PCI bus speed is determined via a divider, and your AGP bus speed is then determined by multiplying the PCI bus speed by 2. The dividers allow you to keep proper speeds on your PCI and AGP buses while still allowing for the faster operation of the main FSB. In most systems PCI dividers are set automatically and you cannot alter them; however, in newer motherboards geared towards computer enthusiasts, PCI dividers can sometimes be set manually in order to allow you to raise the system clock higher than its normal rate. The three most common dividers built into motherboards are: 1/5 (used on a 166MHz system clock), 1/4 (used on a 133MHz system clock), and 1/3 (used on a 100MHz system clock). A 1/6 divider is sometimes available for overclocking and future support.
Example: If you have a 166MHz system clock and you set a 1/5 divider in your motherboard's BIOS, then your PCI bus speed would be 166/5 = ~33MHz and your AGP bus speed would be ~33*2 = 66MHz.
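That divider arithmetic is easy to check for yourself; here is a small Python sketch (the function name is my own, and the divider is given as a fraction such as 1/5, 1/4, or 1/3):

```python
def bus_speeds(system_clock_mhz, pci_divider):
    """Derive PCI and AGP bus speeds from the system clock.

    The PCI bus runs at system clock x divider; the AGP bus runs
    at twice the PCI bus speed, as described above.
    """
    pci = system_clock_mhz * pci_divider
    agp = pci * 2
    return pci, agp

pci, agp = bus_speeds(166, 1/5)
print(round(pci), round(agp))  # prints: 33 66
```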
Why Isn't My Processor the Right Speed?
An often-misunderstood property of the system clock is its effect on processor speed. A value called the "CPU multiplier" determines the speed of a processor in MHz: if you take the multiplier of the processor and multiply it by the system clock speed, you get the speed of your processor. Your CPU has its multiplier hardwired into the chip, and this *normally* cannot be changed. Your system clock is another matter. It can be set on your motherboard via the BIOS or a set of switches on the board itself. This is very important. Most motherboards do not automatically set the system clock for you when you install a new processor. We often get reports from new system builders saying that they received the wrong speed processor when in actuality, the new builder forgot to set the system clock to the right speed for their processor. For a list of standard system clock speeds please see Part IV of this article.
Part Three: Double Pumping, Quad Pumping, and DDR
Earlier in this article I compared a bus to a road and the bus speed to a speed limit. This isn't entirely accurate because, unlike a real-life speed limit, you are not talking about miles or kilometers per hour; you are talking about MHz, or millions of clock cycles a second. A cycle is easily represented by a sine wave.
Traditional parts without any enhancements can only send/receive a signal once per cycle. A good example of this as far as memory is concerned is standard SDRAM such as PC133. The traditional approach has been around for a long time and it matched well with the unenhanced buses found on processors such as the Intel Pentium / II / III or AMD K6 series. For these types of systems standard SDRAM made a lot of sense because the memory and the processor were both able to transmit at the same rate and the bus speeds could be synchronized.
Enter the Present - The Double Pumped bus w/DDR
As time progressed processor and memory manufacturers found ways of improving the number of access times per cycle. With the release of the AMD Athlon Processor the world saw the concept of a "Double Pumped FSB". With a double pumped bus the processor could send and receive a signal from the memory sub system twice a cycle. This was a great idea; however this meant that standard SDRAM memory no longer lined up. Standard SDRAM memory could only send/receive once a cycle. What was created is what is known as a bottleneck -- or an obstacle to maximum performance. Removing the bottleneck required a new and faster type of memory and the memory that filled this gap was DDR memory or Double Data Rate memory.
DDR memory can transmit twice a cycle just like the double pumped bus on an Athlon processor, which means that using it with an Athlon processor creates an optimized situation just like you had before with the traditional system.
Quad Pumping, the P4, and Rambus Memory
When the Pentium 4 came out, Intel introduced a new catch phrase to the market: "Quad Pumped" (also known as QDR). The Pentium 4 FSB can handle 4 signals a cycle. When the P4 was first released, motherboards only supported traditional SDRAM accessing once a cycle. As you can imagine, combining once-per-cycle memory with a four-accesses-per-cycle processor gives you a massive bottleneck and greatly reduces the potential performance of the processor. Intel was very quick to adopt the fastest memory technology available: Rambus RIMM memory. Although Rambus memory only accesses twice a cycle like DDR, Rambus memory comes in much higher speeds than DDR. The base speed of the popular Rambus memory at the time was a double-pumped 400MHz (800MHz effective). Although the memory does not handle 4 signals a cycle, it works very well since 400MHz is also the enhanced speed of the standard P4 FSB (4 accesses a cycle x 100MHz). The memory's much higher clock speed helps make up for the two-signals-a-cycle difference. It is not as good as true QDR would be, but the technology is widely available, unlike QDR memory.
Dual Channel Technology
Let's say you have a car that can hold 4 people but you've got 8 people to transport across town. What do you do? Well, you could take one load of people across town and then go back for another load (a standard memory system), or, if money were no object, you could simply buy another car and have the other half of the people follow you across town (a dual channel memory bus). With dual channel technology you use two memory modules at once to further enhance performance. This essentially doubles the number of signals a second you can handle and doubles your bandwidth (the volume of information that can be transferred at once). Point blank: dual channel technology increases memory performance, but it costs more money because you have to buy memory modules in pairs. It also costs more because the motherboard has to support it in the chipset, and a chipset that supports dual channel costs more due to the higher complexity of the memory bus. Higher motherboard cost + higher memory cost = higher overall system cost.
Dual channel Rambus has been around for a long time, but dual channel DDR technology is just now hitting the scene en masse. Since DDR memory is cheaper than Rambus memory and more widely available, dual channel DDR should be a good option for the P4 processor; the problem is that as of this writing no consumer-level chipset supporting dual channel DDR exists for the Pentium 4. (Dual channel DDR is widely available for the AMD Athlon XP series of processors via the nForce2 chipset by nVidia.) When dual channel DDR solutions emerge for the Pentium 4, they will quickly become the best price-vs-performance option on the P4 side.
Update [Sept. 2007]: At this point, pretty much any motherboard you buy is going to have either dual channel DDR, DDR2, or DDR3. Most, but not all, dual-channel-supporting motherboards will operate in a slower single channel mode with one stick of RAM, but work best with RAM in pairs. On these boards it's best not to try an odd three-stick RAM configuration, as some motherboards will have problems operating in this fashion or plain won't POST.
It's worth noting that at this point, Rambus is no longer widely available, nor is it supported by any current motherboard chipset. Those with Rambus-based systems are strongly encouraged to upgrade to systems utilizing DDR2 or DDR3. As far as which to go with [DDR2 vs DDR3], right now DDR2 is much more common than DDR3 and cheaper. DDR3 will probably start becoming common sometime in 2008 when more motherboard chipsets come out with full support for it.
Part IV: The System Clock, the Front Side Bus, and Overclocking
Now that you understand the performance enhancements in the FSB of a processor it is important that you understand how to figure out the processor multiplier and the proper system clock. When you go to purchase a processor you are told in the ad / description for the processor what FSB it has. To determine the proper system clock for the processor simply divide the FSB by the performance enhancer (2 for the double pumped bus on AMD Athlon XP/Thunderbird/Duron processors or 4 for the quad pumped bus on the Intel Pentium 4).
If your processor has a ... FSB then the system clock speed should be:
66MHz (Various Celeron and older): 66MHz clock
100MHz (Pentium II / Pentium III / K6): 100MHz clock
133MHz (Pentium II / Pentium III / K6): 133MHz clock
200MHz (Athlon, Duron, Thunderbird): 100MHz clock
266MHz (Thunderbird, XP): 133MHz clock
333MHz (XP): 166MHz clock
400MHz (Pentium 4): 100MHz clock
400MHz (AMD XP): 200MHz clock
533MHz (Pentium 4): 133MHz clock
800MHz (Pentium 4): 200MHz clock
800MHz (AMD64): 200MHz clock
1066MHz (Pentium 4/LGA775): 266MHz clock
1333MHz (Pentium 4/LGA775): 333MHz clock
Now, remember what I said about the processor multiplier earlier in this article? (Processor speed = processor multiplier x system clock)
If you do not know the multiplier for your processor, simply take the proper system clock speed for it, divide it into the rated processor speed, and then round the quotient to the nearest 0.5. Examples: The Pentium 4 3.06GHz processor has a FSB of 533MHz. Its system clock is 533 / 4 = ~133MHz. The multiplier is 3,060 / 133 = ~23.
The AMD Athlon XP2700+ has a main clock speed of 2.17GHz and a FSB of 333MHz. Its system clock is 333 / 2 = ~166MHz. The multiplier is 2,170 / 166 = ~13.
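Both worked examples follow the same recipe, which can be sketched in Python (the function names are my own; pump is 2 for a double-pumped AMD bus and 4 for the quad-pumped Pentium 4):

```python
def system_clock_mhz(fsb_mhz, pump):
    """Recover the system clock from an advertised FSB speed."""
    return fsb_mhz / pump

def cpu_multiplier(cpu_mhz, clock_mhz):
    """Estimate the CPU multiplier, rounded to the nearest 0.5."""
    return round(cpu_mhz / clock_mhz * 2) / 2

# Pentium 4 3.06GHz with a 533MHz quad-pumped FSB
clock = system_clock_mhz(533, 4)      # ~133MHz
print(cpu_multiplier(3060, clock))    # prints: 23.0

# Athlon XP2700+ (2.17GHz) with a 333MHz double-pumped FSB
clock = system_clock_mhz(333, 2)      # ~166MHz
print(cpu_multiplier(2170, clock))    # prints: 13.0
```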
Underclocking and Overclocking
Underclocking, or the act of running a processor or device at under its rated speed, is accomplished by simply running the device at a lower bus speed (or, if possible, a lower multiplier). Most underclocking is done by accident by new system builders. Most motherboards default to the lowest system clock speed that the motherboard supports. Since the system clock speed is usually not automatically set by the processor you put into the board, this means that if you install a processor with a higher bus speed than the lowest one the board supports, you are underclocking the processor.
Example: Let's say I buy an AMD Athlon XP2400+ processor with a FSB of 266MHz (the XP2400+ has a clock speed of ~2000MHz). If I do not set the system clock to 133MHz, then I get the processor's multiplier (15) times the default bus speed (100). This gives me the wrong processor speed (1500MHz), and the motherboard will either tell me I have a 1,500MHz Thunderbird processor or an XP1700+ processor. Changing the system clock in the BIOS to 133 will make the motherboard detect the processor properly and give me the right processor speed.
Overclocking, or the act of running a processor or device higher than its rated speed, is accomplished by increasing the system clock (or, if possible, the multiplier). The biggest issue with overclocking is keeping your PCI bus close to its speed limit (33MHz). Since your PCI bus speed is determined by a divider of the system clock, increasing the system clock affects not only your processor but also other parts of the system. Devices attached to the PCI bus are much less overclocking-friendly than either memory or a CPU. When you overclock a processor using the system clock, your processor speed is determined the same way as the normal clock speed: processor multiplier x system clock = processor speed.
Example: An Athlon XP1800+ (1.53GHz) processor with a FSB of 266 and its system clock overclocked to 145MHz would give you a speed of ~1.67GHz and cause the board to detect the processor as an XP2000+.
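The same formula from earlier gives the overclocked speed; a quick Python sketch (the multiplier 11.5 follows from the XP1800+'s 1.53GHz rating at a ~133MHz clock):

```python
def cpu_speed_mhz(multiplier, system_clock_mhz):
    """Processor speed = multiplier x system clock."""
    return multiplier * system_clock_mhz

print(cpu_speed_mhz(11.5, 133))  # stock: 1529.5 (~1.53GHz)
print(cpu_speed_mhz(11.5, 145))  # overclocked: 1667.5 (~1.67GHz)
```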
Part V: Summary and Conclusion
When you are choosing and installing components in a system, you should now know how to properly set the system clock in order to achieve the full potential of the system. You should also now understand more about matching memory with a processor. Go with a motherboard/system that complements your CPU and provides memory support that matches the FSB's potential. Slower memory technologies such as PC133 SDRAM do not work well with current processors such as the Pentium 4 and Athlon XP. Although synchronizing the memory speed and the FSB speed is best, it is OK to use memory that is faster than the FSB of your processor, provided that the motherboard supports it.
Updates
Since this guide was written, a few new technologies have emerged, such as DDR2 memory and 64-bit processors for the desktop. Here are a few additional pieces of information about these technologies:
DDR2 Short and Sweet
There are a few major things you need to know about DDR2 when building a system:
Basic Functionality: DDR2 memory has a different approach to design at the chip level than DDR. The simplest way to understand how it works is to think of it as two chips of half the stated memory speed working in tandem to achieve the full stated speed. So DDR2 400 would be something like two chips of DDR200 working together to achieve the full 400 speed. Notice that I say "chips," not sticks of memory. All this happens on one stick of memory.
The overall effect of this trickery is that manufacturers can scale the speed of the memory beyond the limits of DDR, while taking only a small hit to the timing of the memory (how long it takes for the memory to respond to a request).
This means it is possible, and expected, to see memory speeds of 533MHz or higher for DDR2. In fact, the current consensus is that if you are building a system and have a choice between a motherboard with DDR1 and one with DDR2, then to see a benefit from DDR2 you need to get at least one speed grade higher than the max normal speed of DDR1 (400MHz). This is because the timings (latency) of DDR2 are worse than DDR's. Essentially, in most situations DDR400 (especially low-latency DDR400) is faster than DDR2 400. However, when you get to DDR2 533 the speed boost makes up for the slower timings.
As far as matching FSB to DDR2 speed my recommendations are to skip DDR2 400 and opt for going with the following:
800MHz FSB = DDR2 533MHz (Ideal) or DDR2 400MHz (Matched but Slow)
1066MHz FSB = DDR2 667MHz (Good) or DDR2 533MHz (Matched)
Generally you want the base clock of your memory to match the system clock of your processor, or be one step above it. The system clock on an 800MHz FSB P4 is 200MHz (quad pumped), so that matches DDR2 400 (essentially 200 unenhanced) or goes well with one step up, DDR2 533MHz (essentially 266 unenhanced). Note, however, that if you only have an 800MHz FSB processor, DDR2 667 probably isn't going to help much. Once you pass the one-step-above mark on the memory you get diminishing returns unless you can get to double (DDR2 800MHz).
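To make the matching concrete, here is a small Python sketch (the function names are mine; it just divides the quad-pumped FSB rating and the double-data-rate memory rating back down to base clocks):

```python
def p4_system_clock(fsb_mhz):
    """Base clock behind a quad-pumped Pentium 4 FSB rating."""
    return fsb_mhz // 4

def ddr2_base_clock(ddr2_rating):
    """Base clock behind a DDR2 speed rating (double data rate)."""
    return ddr2_rating // 2

fsb_clock = p4_system_clock(800)            # 200MHz
print(ddr2_base_clock(400) == fsb_clock)    # True: DDR2 400 is matched
print(ddr2_base_clock(533))                 # 266: one step up, the sweet spot
```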
Compatibility: Generally a motherboard is only going to accept DDR1 or DDR2, not both. The slots are physically different and have a different number of pins; however, people have been known to force memory into the wrong slots (and that ends in horrible results!). Be careful when installing memory and make sure the motherboard takes that kind of memory before attempting.
At the time of this writing, only motherboards for the Pentium 4 or Xeon accepted DDR2. AMD Socket 754 and Socket 939 motherboards can't accept DDR2 due to the integrated memory controller in the CPU. AMD is making a line of CPUs that can work with DDR2; they will use new motherboards and a new socket called M2.
Additional Notes on DDR2: 1) DDR2 is not QDR like I mentioned earlier; the technology is different. 2) DDR2 does give you definite benefits and it is recommended. 3) At the time of this writing, ALL motherboards that used DDR2 were dual channel ready. 4) It is not uncommon to hear of problems from people trying to use 3 sticks of DDR2. This stems somewhat from what I mentioned in #3. I recommend using 1, 2 or 4 sticks of DDR2 (more is OK if you are building a server, but add them in pairs; don't use an odd number of sticks if you can avoid it).
A few notes on 64 bit
1) You can use 32-bit operating systems with a 64-bit AMD CPU or an EM64T-enabled Pentium 4/Xeon.
2) If you plan to run a 64-bit OS with your 64-bit processor and are actually going to use 64-bit applications (not just 32-bit applications on a 64-bit OS), then it is recommended that you double the amount of memory you think you need. So, for example, if you think you would be comfortable with 512MB of memory, use 1GB. If you wanted 1GB, use 2GB. [Generally 2GB is fine for most anything.]
3) The most common issue with 64 bit CPUs and 64 bit operating systems is that you need all new drivers for your hardware. Often the driver CDs that come with hardware lack the 64 bit driver and you have to download new ones from the web.
What is Dual Core Technology?
Dual core technology refers to two individual microprocessors on a single chip (die). This is essentially two central processing units (CPUs) in one. The advantage of a dual core chip is that tasks can be carried out in parallel streams, decreasing processing time. This is referred to as thread-level parallelism (TLP).
TLP is also possible on motherboards that can accommodate two separate CPUs. When TLP is accomplished in a single chip through dual core technology, it is called chip-level multiprocessing (CMP).
In dual core CPUs, each microprocessor generally has its own on-board cache, known as Level 1 (L1) cache. L1 cache significantly improves system performance, because it is much faster to access on-chip cache than to use random access memory (RAM). L1 cache is accessed at microprocessor speeds.
Dual core chips also commonly feature secondary shared cache on the CPU, known as Level 2 (L2) cache. Motherboards may also have a cache chip designated as Level 3 (L3) cache. While faster than RAM, L3 cache is slower than cache built into the dual core chip.
Dual core technology has advantages over double-core or twin-core technology. These latter terms refer to two independent CPUs installed on the same motherboard. Dual core chips take up less real estate on the motherboard, have greater cache coherency, and consume less power than two independent CPUs. However, dual core technology also has its drawbacks.
For software to take advantage of dual core architecture, it must be written to utilize parallel threading. Otherwise, the program functions in single-core mode, using just one data stream or one of the built-in microprocessors. Unfortunately, coding for TLP is quite intensive, as interleaving shared data can create errors and slow performance. Because of these and other issues, a dual core processor does not deliver twice the speed of a single-core processor, though there is a significant increase in performance under optimal conditions. Finally, dual core chips run hotter than their single-core cousins.
Whether a dual core processor is right for you will depend on what you plan to use your computer for. If the programs you regularly require are designed for TLP, then you may benefit greatly from a dual core chip. If not, you may be better served by a high-end single-core CPU.
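The parallel-streams idea above can be sketched with Python's standard multiprocessing module, which spreads work across CPU cores as separate processes. The prime-counting workload and the function name are illustrative assumptions, not anything from the article; the point is simply that two CPU-bound chunks can run at once on a dual core chip.

```python
# Minimal sketch of thread-level parallelism across two cores,
# using Python's multiprocessing Pool. Workload is illustrative.
from multiprocessing import Pool

def count_primes(limit):
    """Count primes below `limit` by trial division (CPU-bound work)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # On a dual core CPU, the two chunks below can run in parallel,
    # one per core, roughly halving wall-clock time for this workload.
    with Pool(processes=2) as pool:
        results = pool.map(count_primes, [20_000, 20_000])
    print(sum(results))
```

A program written as a single sequential loop would get no benefit from the second core; the work has to be split into independent pieces first, which is exactly the coding effort the paragraph above describes.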
What is a Cache?
Cache (pronounced cash) memory is extremely fast memory that is built into a computer’s central processing unit (CPU), or located next to it on a separate chip. The CPU uses cache memory to store instructions that are repeatedly required to run programs, improving overall system speed. The advantage of cache memory is that the CPU does not have to use the motherboard’s system bus for data transfer. Whenever data must be passed through the system bus, the data transfer speed slows to the motherboard’s capability. The CPU can process data much faster by avoiding the bottleneck created by the system bus.
As it happens, once most programs are open and running, they use very few resources. When these resources are kept in cache, programs can operate more quickly and efficiently. All else being equal, cache is so effective in system performance that a computer running a fast CPU with little cache can have lower benchmarks than a system running a somewhat slower CPU with more cache. Cache built into the CPU itself is referred to as Level 1 (L1) cache. Cache that resides on a separate chip next to the CPU is called Level 2 (L2) cache. Some CPUs have both L1 and L2 cache built-in and designate the separate cache chip as Level 3 (L3) cache.
Cache that is built into the CPU is faster than separate cache, running at the speed of the microprocessor itself. However, separate cache is still roughly twice as fast as Random Access Memory (RAM). Cache is more expensive than RAM, but it is well worth getting a CPU and motherboard with built-in cache in order to maximize system performance.
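The core caching principle described above can be shown with a toy sketch: keep the result of an expensive lookup in fast storage so repeat requests skip the slow path. The `slow_fetch` function below is an invented stand-in for a trip to slower storage (RAM from the CPU's point of view, or disk from RAM's).

```python
# Toy illustration of the caching principle: remember results of
# expensive lookups so repeat requests are served from fast storage.
cache = {}
slow_calls = 0

def slow_fetch(key):
    """Stand-in for an expensive trip to slow storage."""
    global slow_calls
    slow_calls += 1
    return key * 2

def cached_fetch(key):
    if key in cache:            # cache hit: answer without the slow trip
        return cache[key]
    value = slow_fetch(key)     # cache miss: fetch, then remember
    cache[key] = value
    return value

cached_fetch(21)                # miss -> slow path taken
cached_fetch(21)                # hit  -> served from the cache
print(slow_calls)               # prints 1: the slow path ran only once
```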
Disk caching applies the same principle to the hard disk that memory caching applies to the CPU. Frequently accessed hard disk data is stored in a separate segment of RAM in order to avoid having to retrieve it from the hard disk over and over, since RAM is much faster than the platter technology used in conventional hard disks. Hybrid hard disks, which have built-in flash memory caches, narrow this gap, and pure flash drives eliminate the mechanical delay entirely. Flash memory is still slower than RAM, however, so RAM-based disk caching remains useful even with flash storage.
What Is a Cache Cleaner?
A cache cleaner is software that can be used when a computer user desires to erase information from his or her personal computer. Good cache cleaner software deletes all traces of the information using government standards. After a cache has been cleaned to standard, files have been completely deleted and are unable to be recovered. This type of software can be useful to someone who keeps detailed financial or otherwise personal records on his computer or shares a single computer with other people.
The type of information that may be targeted by cache cleaners is wide and varied. Typically, saved passwords, Internet browsing histories and autocomplete entries are the things that the average computer user is interested in clearing. A cache cleaner can target anything, including but not limited to: caches, cookies, browsing and search histories, index.dat files, temporary folders and files, visited and typed URLs, run histories, open and save history for documents and recent documents. Quality cache cleaner software can modify all of these aspects of a computer and more.
Technically, the erasing of personal information and details can be done manually. However, many computer users invest in cache cleaner software so that they don't have to learn how to clear information and then perform the task. When users opt to use cache cleaner software, they do so with the knowledge that they can clear a substantial amount of personal information with the pressing of a single button.
Users can choose to clear their caches once or at specific times. Some software allows users to choose a regular interval for cleaning, such as weekly. More concerned computer users may choose to have their caches cleaned as often as possible. For these users, caches can be cleaned every time they start and shut down their computers to safeguard all of their information.
Despite a cache cleaner's ease of use and seemingly complete erasure of details, users may opt to configure their cache cleaner to only modify certain parts of their systems. For example, if a user does not want to erase the login details of a particular website, the user may prompt the cleaner to ignore the website from its sweep. The cleaner can then make the appropriate modifications based on user preferences and skip information that a user considers important to keep on a system. Advanced cleaner software may also allow users to change computer paths so that the detection of remaining files becomes more difficult for an uninvited guest.
What is L1 Cache?
Level 1 or L1 cache is special, very fast memory built into the central processing unit (CPU) to help facilitate computer performance. By loading frequently used bits of data into L1 cache, the computer can process requests faster. Most computers also have L2 and L3 cache, which are slower than L1 cache but faster than Random Access Memory (RAM).
When we request programs or files from a standard platter hard drive, the device must search the internal disks for the information by sliding a head mechanism across the platters, roughly analogous to the way a needle reads a phonograph record. However, in the case of a disk drive, there are multiple platters and the head is magnetic, reading at a very high rate of speed. Nevertheless, the standard hard drive is the slowest storage device on the computer, compact disk drives aside.
We normally think of RAM as being quite fast because it is so much faster than hard drives. RAM is a temporary holding area that becomes active when the computer boots. Computers commonly have 1-4 Gigabytes (GB) of RAM. By loading frequently requested programs, files, pictures and other items into RAM, the computer doesn’t have to search the hard drive(s) to retrieve the information on subsequent requests.
While this is a good strategy, the CPU can work faster than RAM, and to speed things along, you might think of L1, L2 and L3 cache as the go-betweens that anticipate what requests will be made of RAM, holding that data at the ready. When a request comes, the CPU checks L1 cache first, followed by L2 and L3 cache (if present). If the CPU finds the requested data in cache, it’s a cache hit, and if not, it’s a cache miss and RAM is searched next, followed by the hard drive. The goal is to maximize hits and minimize misses that slow performance.
While L1 cache is built into CPUs today, it might also reside alongside the CPU on older PCs. L2 cache can be built into the CPU or present on the motherboard, along with L3 cache. In some cases L3 cache is also being incorporated into the CPU. Unlike RAM, cache is not expandable.
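The lookup order described above (L1 first, then L2, then L3, then RAM) can be sketched as a simple search through successively larger tables. The cache contents and addresses below are invented purely for illustration.

```python
# Sketch of the multi-level lookup order: check L1, then L2, then L3,
# and fall back to RAM only on a complete miss. Contents are invented.
def lookup(address, l1, l2, l3, ram):
    for name, cache in (("L1", l1), ("L2", l2), ("L3", l3)):
        if address in cache:
            return name, cache[address]   # cache hit at this level
    return "RAM", ram[address]            # miss at every cache level

l1 = {0x10: "add"}
l2 = {0x20: "load"}
l3 = {0x30: "store"}
ram = {0x40: "jump"}

print(lookup(0x10, l1, l2, l3, ram))  # ('L1', 'add'): an L1 hit
print(lookup(0x40, l1, l2, l3, ram))  # ('RAM', 'jump'): missed everywhere
```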
What is L2 Cache?
Level 2 or L2 cache is part of a multi-level storage strategy for improving computer performance. The present model uses up to three levels of cache, termed L1, L2 and L3, each bridging the gap between the very fast computer processing unit (CPU) and the much slower random access memory (RAM). While the design is evolving, L1 cache is most often built into the CPU, while L2 cache has typically been built into the motherboard (along with L3 cache, when present). However, some CPUs now incorporate L2 cache as well as L1 cache, and a few even incorporate L3 cache.
The job of CPU cache is to anticipate data requests, so that when the user clicks on a frequently used program, for example, the instructions required to run that program are at the ready, stored in cache. When this happens, the CPU can process the request without delay, drastically improving computer performance. The CPU will check L1 cache first, followed by L2 and L3 cache. If it finds the needed bits of data, this is a cache hit, but if the cache doesn’t anticipate the request, the CPU gets a cache miss, and the data must be pulled from slower RAM or the hard drive which is slower still.
Since it is the job of CPU cache to hold bits of data, you might wonder why there is more than one level of cache. Why have L2 cache at all, much less L3, when you can just make L1 cache bigger?
The answer is that the larger the cache, the longer the latency. Small caches are faster than large caches. To optimize overall performance, the best result is obtained by having the smallest, fastest cache most immediate to the CPU itself, followed by a slightly larger pool of L2 cache, and an even larger pool of L3 cache. The idea is to keep the most frequently used instructions in L1, with L2 cache holding the next most likely needed bits of data, and L3 following suit. If the CPU needs to process a request that isn’t present in L1 cache, it can quickly check L2 cache, then L3.
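The size-versus-latency trade-off above can be made concrete with the standard average memory access time (AMAT) formula: hit time plus miss rate times miss penalty. The cycle counts and miss rates below are illustrative assumptions, not measured figures.

```python
# Average memory access time (AMAT) shows why a small fast L1 backed
# by larger, slower levels beats one big slow cache.
# All cycle counts and miss rates here are illustrative assumptions.
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# Suppose L1 hits in 1 cycle with a 10% miss rate, L2 hits in 10
# cycles with a 5% miss rate, and a trip to RAM costs 100 cycles.
l2_amat = amat(10, 0.05, 100)    # 15.0 cycles whenever L1 misses
print(amat(1, 0.10, l2_amat))    # 2.5 cycles on average
```

Even though L2 is ten times slower than L1, the average access cost stays close to L1 speed because most requests never get past the first level.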
Cache design is a key strategy in the highly competitive microprocessor market, as it is directly responsible for improved CPU and system performance. Multi-level cache is made from more expensive static RAM (SRAM) chips versus cheaper dynamic RAM (DRAM) chips. DRAM and synchronous DRAM (SDRAM) chips are what we normally refer to simply as RAM. SRAM and SDRAM chips should not be confused.
When looking at new computers check out the amounts of L1, L2 and L3 cache. All else being equal, a system with more CPU cache will perform better, and synchronous cache is faster than asynchronous.
What is L3 Cache?
Level 3 or L3 cache is specialized memory that works hand-in-hand with L1 and L2 cache to improve computer performance. L1, L2 and L3 cache are computer processing unit (CPU) caches, versus other types of caches in the system such as hard disk cache. CPU cache caters to the needs of the microprocessor by anticipating data requests so that processing instructions are provided without delay. CPU cache is faster than random access memory (RAM), and is designed to prevent bottlenecks in performance.
When a request is made of the system the CPU requires instructions for executing that request. The CPU works many times faster than system RAM, so to cut down on delays, L1 cache has bits of data at the ready that it anticipates will be needed. L1 cache is very small, which allows it to be very fast. If the instructions aren’t present in L1 cache, the CPU checks L2, a slightly larger pool of cache, with a little longer latency. With each cache miss it looks to the next level of cache. L3 cache can be far larger than L1 and L2, and even though it’s also slower, it’s still a lot faster than fetching from RAM.
Assuming the needed instructions are found in L3 cache (a cache hit), bits of data might be evicted from L1 cache to hold the new instructions in case they’re needed again. L3 cache can then remove that line of instructions since it now resides in another cache (referred to as exclusive cache), or it might hang on to a copy (referred to as inclusive cache), depending on the design of the CPU.
For example, in November 2008 AMD® released their quad-core Shanghai chip. Each core has its own L1 and L2 caches, but the cores share a common L3 cache. L3 keeps copies of requested items in case a different core makes a subsequent request.
The architecture for multi-level cache continues to evolve. L1 cache used to be external to the CPU, built into the motherboard, but now both L1 and L2 caches are commonly incorporated into the CPU die. L3 cache has typically been built into the motherboard, but some CPU models are already incorporating L3 cache. The advantage of having on-board cache is that it’s faster, more efficient and less expensive than placing separate cache on the motherboard.
Fetching instructions from cache is faster than calling upon system RAM, and a good cache design greatly improves system performance. Cache design and strategy will be different on various motherboards and CPUs, but all else being equal, more cache is better.
What Is CPU Virtualization?
CPU virtualization involves a single CPU acting as if it were two separate CPUs. In effect, this is like running two separate computers on a single physical machine. Perhaps the most common reason for doing this is to run two different operating systems on one machine.
The CPU, or central processing unit, is arguably the most important component of the computer. It is the part of the computer which physically carries out the instructions of the applications which run on the computer. The CPU is often known simply as a chip or microchip.
The way in which the CPU interacts with applications is determined by the computer's operating system. The best known operating systems are Microsoft Windows®, Mac OS® and various open-source systems under the Linux banner. In principle a CPU can only operate one operating system at a time. It is possible to install more then one system on a computer's hard drive, but normally only one can be running at a time.
The aim of CPU virtualization is to make a CPU run in the same way that two separate CPUs would run. A very simplified explanation of how this is done is that virtualization software is set up in a way that it, and it alone, communicates directly with the CPU. Everything else which happens on the computer passes through the software. The software then splits its communications with the rest of the computer as if it were connected to two different CPUs.
One use of CPU virtualization is to allow two different operating systems to run at once. As an example, an Apple computer could use virtualization to run a version of Windows® as well, allowing the user to run Windows®-only applications. Similarly a Linux-based computer could run Windows® through virtualization. It's also possible to use CPU virtualization to run Windows® on a Mac® or Linux PC, or to run Mac OS® and Linux at the same time.
Another benefit of virtualization is that it allows a single computer to be used by multiple people at once. One machine runs the virtualization software and connects to multiple "desks," each with its own keyboard, mouse and monitor. Each user then runs his or her own copy of the operating system through the same CPU. This set-up is particularly popular in locations such as schools in developing markets where budgets are tight. It works best where the users are mainly running applications with relatively low processing demands, such as web browsing and word processing.
CPU virtualization should not be confused with multitasking or hyperthreading. Multitasking is simply the act of running more than one application at a time. Every modern operating system allows this to be done on a single CPU, though technically only one application is dealt with at any particular moment. Hyperthreading is where compatible CPUs can run specially written applications in a way that carries out two actions at the same time.
What is Multitasking?
Multitasking is the act of doing multiple things at once. It is often encouraged among office workers and students, because it is believed that multitasking is more efficient than focusing on a single task at once. Numerous studies on multitasking have been carried out, with mixed results. It would appear that in some cases, multitasking is indeed an effective way to utilize time, while in other instances, the quality of the work suffers as a result of split attention.
The term initially emerged in the tech industry, to describe a computer's single central processing unit performing multiple tasks. Early computers were capable of performing only one function at once, although sometimes very quickly. Later computers were able to run a wide assortment of programs; in fact, your computer is multitasking right now as it runs your web browser and any other programs you might have open, along with the basic programs which start every time you log on to your operating system.
In the late 1990s, people began to use “multitasking” to describe humans, especially in office environments. A secretary might be said to be multitasking when she or he answers phones, responds to emails, generates a report, and edits a form letter simultaneously. The ability of the human mind to focus on multiple tasks at once is rather amazing; the American Psychological Association calls this the “executive control” of the brain. The executive control allows the brain to delegate tasks while skimming material and determining the best way to process it.
While accomplishing multiple things at once appears more efficient on the surface, it can come with hidden costs. Certain complex higher order tasks, for example, demand the full function of the brain; most people wouldn't want brain surgeons multitasking, for example. Insufficient attention can cause errors while multitasking, and switching between content and different media formats can have a detrimental effect as well.
A certain amount of multitasking has become necessary and expected in many industries, and job seekers often list the ability to multitask as a skill on their resumes. Students also find this skill very valuable, since it allows them to take notes while processing lecture information, or work on homework for one course while thinking about another. When you do decide to multitask, make sure to check your work carefully, to ensure that it is of high quality, and consider abandoning multitasking for certain tasks if you notice a decline.
What is Multithreading?
In the world of computing, multithreading is the task of creating a new thread of execution within an existing process rather than starting a new process to begin a function. Essentially, multithreading is intended to make wiser use of computer resources by allowing resources that are already in use to be simultaneously utilized by a slight variant of the same process. The basic concept of multithreading has been around for some time, but gained wider attention as computers became more commonplace during the 1990s.
This form of time-division multiplexing creates an environment where a program is configured to allow processes to fork or split into two or more threads of execution. The parallel execution of threads within the same program is often touted as a more efficient use of the resources of the computer system, especially with desktop and laptop systems. By allowing a program to handle multiple tasks with a multithreading model, the system does not have to allow for two separate programs to initiate two separate processes and have to make use of the same files at the same time.
While there are many proponents of multithreading, there are also those that understand the process as being potentially harmful to the task of computing. The time slicing that is inherent in allowing a fork or thread to split off from a running process is thought by some to set up circumstances where there may be some conflict between threads when attempting to share caches or other hardware resources. There is also some concern that the action of multithreading could lower the response time of each single thread in the process, effectively negating any time savings that is generated by the configuration.
However, multithreading remains one of the viable options in computer multitasking. It is not unusual for a processor to allow for both multithreading as well as the creation of new processes to handle various tasks. This allows the end user all the benefits of context switching while still making the best use of available resources.
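The fork-within-one-process idea above can be sketched with Python's standard threading module: several threads are spawned inside a single process and share its memory, which is both the efficiency win and the source of the contention concerns just described (hence the lock below). The worker function and its arguments are illustrative assumptions.

```python
# Minimal multithreading sketch: one process forks extra threads of
# execution instead of starting new processes. Threads share memory,
# so access to the shared list is guarded with a lock.
import threading

results = []
lock = threading.Lock()

def worker(name):
    with lock:                       # avoid conflicting writes
        results.append(name)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()                        # all four threads run in one process
for t in threads:
    t.join()                         # wait for every thread to finish

print(sorted(results))               # [0, 1, 2, 3]
```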
What is Latency?
1) In a network, latency, a synonym for delay, is an expression of how much time it takes for a packet of data to get from one designated point to another. In some usages (for example, AT&T's), latency is measured by sending a packet that is returned to the sender; the round-trip time is considered the latency.
The underlying assumption is that data should be transmitted instantly between one point and another (that is, with no delay at all). Contributors to network latency include propagation delay, transmission delay over the physical medium, processing time in routers and other intermediate nodes, and delays in the computers and storage systems at either end.
2) In a computer system, latency is often used to mean any delay or waiting that increases real or perceived response time beyond the desired response time. Specific contributors to computer latency include mismatches in data speed between the microprocessor and input/output devices, and inadequate data buffers.
Within a computer, latency can be removed or "hidden" by such techniques as prefetching (anticipating the need for data input requests) and multithreading, or using parallelism across multiple execution threads.
3) In 3D simulation, in describing a helmet that provides stereoscopic vision and head tracking, latency is the time from the moment the computer detects head motion to the moment it displays the appropriate image.
What is CPU?
The heart of a computer is the central processing unit, or CPU. This device contains all the circuitry that the computer needs to manipulate data and execute instructions. The CPU is amazingly small given the immense amount of circuitry it contains. We have already seen that the circuits of a computer are made of gates. Gates, however, are themselves made of another tiny component called a transistor, and a modern CPU has millions and millions of transistors in its circuitry. The image to the right [Intel 2000] shows just how compact a CPU can be; the CPU pictured is a Pentium III processor for mobile PCs.
The CPU is composed of five basic components: RAM, registers, buses, the ALU, and the Control Unit. Each of these components is pictured in the diagram below, which shows a top view of a simple CPU with 16 bytes of RAM. To better understand the basic components of the CPU, we will consider each one in detail.
* RAM: this component is created from combining latches with a decoder. The latches create circuitry that can remember while the decoder creates a way for individual memory locations to be selected.
* Registers: these components are special memory locations that can be accessed very fast. Three registers are shown: the Instruction Register (IR), the Program Counter (PC), and the Accumulator.
* Buses: these components are the information highway for the CPU. Buses are bundles of tiny wires that carry data between components. The three most important buses are the address, the data, and the control buses.
* ALU: this component is the number cruncher of the CPU. The Arithmetic / Logic Unit performs all the mathematical calculations of the CPU. It is composed of complex circuitry similar to the adder presented in the previous lesson. The ALU, however, can add, subtract, multiply, divide, and perform a host of other calculations on binary numbers.
* Control Unit: this component is responsible for directing the flow of instructions and data within the CPU. The Control Unit is actually built of many other selection circuits such as decoders and multiplexors. In the diagram above, the Decoder and the Multiplexor compose the Control Unit.
In order for a CPU to accomplish meaningful work, it must have two inputs: instructions and data. Instructions tell the CPU what actions need to be performed on the data. We have already seen how data is represented in the computer, but how do we represent instructions? The answer is that we represent instructions with binary codes just like data. In fact, the CPU makes no distinction about whether it is storing instructions or data in RAM. This concept is called the stored-program concept. Brookshear [1997] explains:
"Early computing devices were not known for their flexibility, as the program that each device executed tended to be built into the control unit as a part of the machine...One approach used to gain flexibility in early electronic computers was to design the control units so they could be conveniently rewired. A breakthrough came with the realization that the program, just like data, can be coded and stored in main memory. If the control unit is designed to extract the program from memory, decode the instructions, and execute them, a computer's program can be changed merely by changing the contents of the computer's memory instead of rewiring the control unit. This stored-program concept has become the standard approach used today. To apply it, a machine is designed to recognize certain bit patterns as representing certain instructions. This collection of instructions along with the coding system is called the machine-language because it defines the means by which we communicate algorithms to the machine."
Thus both inputs to the CPU are stored in memory, and the CPU functions by following a cycle of fetching an instruction, decoding it, and executing it. This process is known as the fetch-decode-execute cycle. The cycle begins when an instruction is transferred from memory to the IR along the data bus. In the IR, the unique bit patterns that make up the machine-language are extracted and sent to the Decoder. This component is responsible for the second step of the cycle, that is, recognizing which operation the bit pattern represents and activating the correct circuitry to perform the operation. Sometimes this involves reading data from memory, storing data in memory, or activating the ALU to perform a mathematical operation. Once the operation is performed, the cycle begins again with the next instruction. The CPU always knows where to find the next instruction because the Program Counter holds the address of the current instruction. Each time an instruction is completed, the program counter is advanced by one memory location.
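The fetch-decode-execute cycle above can be sketched as a toy machine in a few lines of Python. The opcodes, the instruction encoding, and the tiny program below are invented for illustration; the essential point from the stored-program concept is that instructions and data live in the same memory.

```python
# Toy fetch-decode-execute loop for the stored-program concept.
# Instructions and data share one memory; the opcodes are invented.
LOAD, ADD, STORE, HALT = 1, 2, 3, 0

def run(memory):
    pc, acc = 0, 0                       # Program Counter and Accumulator
    while True:
        opcode, operand = memory[pc]     # fetch the instruction at PC
        pc += 1                          # advance to the next location
        if opcode == LOAD:               # decode, then execute:
            acc = memory[operand]        #   copy data from memory
        elif opcode == ADD:
            acc += memory[operand]       #   have the "ALU" add to it
        elif opcode == STORE:
            memory[operand] = acc        #   write the result back
        elif opcode == HALT:
            return memory

# Program: load memory[5], add memory[6], store the sum in memory[7].
memory = [(LOAD, 5), (ADD, 6), (STORE, 7), (HALT, 0), None, 2, 3, 0]
print(run(memory)[7])                    # prints 5 (2 + 3)
```

Changing the program is just a matter of changing the contents of memory, which is exactly the flexibility Brookshear's passage attributes to the stored-program concept.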
About Your Asus Motherboard
Standard Configuration
P6T Deluxe V2
Maximum RAM: 24GB
Standard RAM: 0MB
Fixed RAM: 0MB
Speed of RAM: PC3-10600
# Banks: 2
# Sockets: 6
Chipset: Intel X58
What type of memory do I need?
Your Asus motherboard uses PC3-10600 memory.
What is the maximum amount of memory that I can add?
Your Asus motherboard can support up to 24GB of memory. For optimal performance, fill each socket with the highest-capacity module it supports.
How much memory does my system have now?
Your system comes standard with 0MB of RAM. If you have upgraded your system then you may have a different amount.
What is a memory kit?
A memory kit is comprised of 2 memory modules. For example, a 1GB memory kit has two 512MB memory modules. This is abbreviated as (2x512MB).
Does my system require a kit?
Yes. Your Asus motherboard requires memory to be installed in matched sets within a bank: pairs for dual-channel operation, or groups of three for triple-channel operation. To make it easier for you to install memory that will be compatible, the memory modules available for purchase above are sold in kits.
What are banks and sockets?
A bank is a group of memory sockets. A socket is where a memory module is inserted. A bank can have one or more sockets.
One or more of the sockets in your system is already filled with memory. When you upgrade your system, you can either add memory to one of the open sockets and/or remove memory from a filled socket and replace it with a higher capacity memory module.
How many banks and sockets does my Asus have?
Your Asus P6T Deluxe V2 has 2 banks of 3 sockets each, for a total of 6 memory sockets.
Bank 1
Sockets 1 - 3
Bank 2
Sockets 4 - 6
Is there anything else I should consider prior to upgrading my Asus motherboard's memory?
- We recommend you upgrade your computer to the latest BIOS revision prior to upgrading your memory.
- 24GB max with the release of 4GB modules.
- Modules must be installed in groups of 3 for triple channel performance.
- Must have a 64-bit operating system to support above 4GB.
- Purchase and install in pairs for dual channel performance.
- Refer to the system user guide for the proper installation of dual channel and triple channel DIMM configurations.
How do I know that EDGE memory will work with my system?
EDGE tests each RAM module to ensure compatibility with your Asus system. The memory modules listed above are guaranteed to work in your Asus P6T Deluxe V2.
ASUS P6T7 WS SuperComputer
* 240-pin DDR3 DIMM Banking: 6 (2 banks of 3)
* Chipset: Intel X58
* DDR3 SDRAM Frequencies: PC3-8500, PC3-10600, PC3-12800, PC3-14400 and PC3-16000
* Error Detection Support: ECC and non-ECC
* Graphics Support: Quad PCI Express x16, CrossFire and SLI support
* Max Unbuffered DDR3 SDRAM: 24576MB
* Module Types Supported: Unbuffered only
* Supported DRAM Types: DDR3 SDRAM only
* USB Support: 2.x Compliant
Q: Will my system recognize the maximum upgrade?
A: Possibly
How much memory your Windows OS will recognize depends on which version of Windows you are running. 32-bit versions of Windows will see (and utilize) only 3GB or 3.5GB. To utilize more memory, install a 64-bit version of your OS. More information about OS memory maximums can be found at http://www.crucial.com/kb/answer.aspx?qid=4251.
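The 32-bit ceiling mentioned above follows directly from the arithmetic of addressing: 32-bit addresses can name only 2^32 distinct bytes, and memory-mapped devices consume part of that range, which is why a 32-bit OS typically exposes about 3 to 3.5GB.

```python
# Why a 32-bit OS tops out near 4GB: 32-bit addresses can name only
# 2**32 distinct bytes, and device address space carves out part of
# that range, leaving roughly 3 to 3.5GB usable for RAM.
addressable_bytes = 2 ** 32
print(addressable_bytes // (1024 ** 3))   # prints 4 (gigabytes)
```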
Q: What memory goes into my computer, and will a faster speed be backward-compatible?
A: DDR3 memory with support for DDR3 PC3-8500,DDR3 PC3-10600,DDR3 PC3-12800 speeds.
Q: How much memory can my computer handle?
A: 24576MB.
Adding the maximum amount of memory will improve performance and help extend the useful life of your system as you run increasingly demanding software applications in the future.
Q: Do I have to install matching pairs?
A: No.
You can install modules one at a time, and you can mix different densities of modules in your computer. But if your computer supports dual-channel or triple-channel memory configurations, you should install matched modules (preferably in kits) for optimal performance.
Q: Does my computer support dual-channel memory?
A: No.
Your system does not support dual-channel memory; it uses a triple-channel architecture instead. Install modules in matched groups of three for the best performance.
Q: Does my computer support ECC memory?
A: Yes.
Your system supports ECC. You can put non-ECC modules into an ECC system, but be sure not to mix ECC and non-ECC modules within a system. Install the same type of modules that are already in your system.
P6T
* Intel LGA1366 Platform
* Intel® X58/ ICH10R chipset
* ASUS TurboV
* 3-Way SLI & Quad-GPU CrossFireX Support!
* ASUS Drive Xpert
* ASUS EPU
* ASUS 8+2 Phase Power Design
P6T Deluxe
* Intel LGA1366 Platform
* Intel®X58 chipset
* Triple-channel DDR3 2000(O.C.)/1866(O.C.)/1800(O.C.)/1600(O.C.)/1333/1066 Memory
* True16+2 phase Power Design
* ASUS TurboV
* ASUS EPU
* ASUS Express Gate SSD
* SAS Onboard
* SLI and CrossFireX on Demand
* 100% High-quality Japan-made Conductive Polymer Capacitors!
VRM 5000hrs lifespan @105°C, 500,000hrs @65°C
P6T Deluxe V2
* Intel LGA1366 Platform
* Intel®X58 chipset
* Triple-channel DDR3 2000(O.C.)/1866(O.C.)/1800(O.C.)/1600(O.C.)/1333/1066 Memory
* True16+2 phase Power Design
* ASUS TurboV
* ASUS EPU
* ASUS Express Gate SSD
* SLI and CrossFireX on Demand
* 100% High-quality Japan-made Conductive Polymer Capacitors!
VRM 5000hrs lifespan @105°C, 500,000hrs @65°C
P6T Deluxe/OC Palm
* Intel LGA1366 Platform
* Intel®X58 chipset
* Triple-channel DDR3 2000(O.C.)/1866(O.C.)/1800(O.C.)/1600(O.C.)/1333/1066 Memory
* True16+2 phase Power Design
* ASUS OC Palm
* ASUS TurboV
* ASUS EPU
* ASUS Express Gate SSD
* SAS Onboard
* SLI and CrossFireX on Demand
* 100% High-quality Japan-made Conductive Polymer Capacitors!
VRM 5000hrs lifespan @105°C, 500,000hrs @65°C
P6T SE
* Intel LGA1366 Platform
* Intel® X58/ ICH10R chipset
* ASUS TurboV
* ASUS EPU
* ASUS 8+2 Phase Power Design
P6T WS Professional
* Intel® Core™ i7 Processor Extreme Edition / Core™ i7 Processor / Xeon® 3500 Series Processor
* Intel® X58 / ICH10R
* True 16+2 Power Phase Design
* ASUS EPU 6-Engine & TurboV
* ATI CrossFireX & NVIDIA SLI Support
* 2 onboard SAS ports
* PCI-X Architecture
* G.P. Diagnosis Card Bundled
* ASUS Express Gate
* 100% High-quality Japan-made Conductive Polymer Capacitors!
VRM lifespan: 5,000 hrs @ 105°C, 500,000 hrs @ 65°C
P6T6 WS Revolution
* Intel® Core™ i7 Processor Extreme Edition / Core™ i7 Processor / Xeon® 3500 Series Processor
* Intel® X58 / ICH10R
* Nvidia® nForce 200
* 6 PCI-E Gen2 x16 IO onboard
* True x16 3-Way SLI
* True 16+2 Power Phase Design
* ASUS EPU 6-Engine & TurboV
* ATI CrossFireX & NVIDIA SLI Support
* 2 onboard SAS ports
* G.P. Diagnosis Card Bundled
* ASUS Express Gate
* 100% High-quality Japan-made Conductive Polymer Capacitors!
VRM lifespan: 5,000 hrs @ 105°C, 500,000 hrs @ 65°C
Rampage II Extreme
* Intel® Core™i7 Processor Ready
* Intel® X58/ICH10R
* Triple-channel, DDR3 2000(O.C.) Support
* TweakIt
* ProbeIt
* Extreme Engine with ML Cap Design
* SLI/CrossFire On-demand
* SupremeFX X-Fi
* BIOS Flashback
Rampage II GENE
* Intel® Core™i7 Processor Ready
* Intel® X58/ICH10R
* Triple-channel, DDR3 2000(O.C.) Support
* MemOK!
* CPU Level Up
* SupremeFX X-Fi built-in
* Pair X58 with SLI™ / CrossFireX™ in SFF system
workstation
Computer intended for use by one person, but with a much faster processor and more memory than an ordinary personal computer. Workstations are designed for powerful business applications that do large numbers of calculations or require high-speed graphical displays; the requirements of CAD/CAM systems were one reason for their initial development. Because of their need for computing power, they are often based on RISC processors and generally use UNIX as their operating system. An early workstation was introduced in 1987 by Sun Microsystems; workstations introduced in 1988 from Apollo, Ardent, and Stellar were aimed at 3D graphics applications. The term workstation is also sometimes used to mean a personal computer connected to a mainframe computer, to distinguish it from "dumb" display terminals with limited applications.
(1) A high-performance, single-user computer typically used for graphics, CAD, software development and scientific applications. A workstation may be a RISC-based computer that runs under some version of Unix or Linux, the major vendors being Sun, HP, IBM and SGI. It may also refer to a high-end PC using Intel or AMD CPUs from any PC vendor. In all cases, the term implies a machine with a fast CPU and large amounts of memory and disk that is geared toward the professional user rather than the consumer.
(2) A terminal or desktop computer in a network. In this context, workstation is just a generic term for a user's machine (client machine) in contrast to a "server" or "mainframe."
(3) In the telecom industry, a combined telephone and computer.
Workstations
For years, workstations like these from Sun, Compaq and SGI (top to bottom) were used for CAD, medical imaging and scientific visualization. Combined with high-resolution monitors, they were traditionally Unix based and pushed the performance envelope. However, today's Windows PCs and Macs are much more powerful than all the earlier Unix workstations. (Images courtesy of Sun, Compaq and SGI.)
Small Business Encyclopedia:
Workstation
Workstation is a general term used to describe two different types of computer systems. At its most sophisticated, a workstation is a high-end, typically expensive, computer used for computer-aided design (CAD), computer-aided engineering (CAE), graphics, simulation, and other applications requiring significant computing resources. At its most basic, a workstation is any personal computer used for business, professional, home, or recreational purposes. A workstation typically includes a combination of a mouse, keyboard, monitor, and central processing unit. It may also include peripheral devices such as a modem, scanner, or printer. In a business setting, a workstation PC is often linked with other computers to a local area network (LAN), which enables it to use the resources of other larger computers in the LAN. A PC, if it has its own hard drive for storage and its own applications installed on it, can be used independently even if it is part of a network.
Workplace Workstations: Potential for Injury
PCs are a fixture in any business, large or small, and are used for word processing, data entry, and other functions. PCs bring with them the expected potential for improved productivity and efficiency. But what some business owners may not realize is that extensive use of PCs by their employees may also result in the employees developing ailments that, in turn, have the potential to lower a business's productivity and increase its healthcare and workers' compensation costs.
The most commonly reported problem associated with computer use is eyestrain. James Sheedy of the University of California, Berkeley estimates that ten million cases of eyestrain are reported each year. As Don Sellers noted in Zap!: How Your Computer Can Hurt You and What You Can Do About It, "The computer is a much more visually demanding environment than people think." To reduce employee eyestrain, employers should adjust lighting to reduce glare on computer screens and encourage workers to take regular breaks to look away from the screen and refocus on a distant object. Employees, particularly those who already wear bifocals, may also want to invest in eyeglasses designed specifically to be worn while working with a computer.
Computer users also face the risk of developing more serious repetitive stress injuries, or cumulative trauma disorders (CTDs). These injuries are disorders of the musculoskeletal and nervous systems that involve nerve compression and wear and tear on muscles and tendons. The U.S. Department of Labor reports that all CTDs (not just those related to computer use) account for 61 percent of occupational illness cases. The direct cost of a CTD is placed at about $27,500 by the National Council on Compensation Insurance, while the indirect costs may include wages for temporary help, overtime pay, and retraining.
According to Sellers, repetitive stress injuries "could possibly be the most serious effect of using a desktop computer and can be very debilitating." Perhaps the best-known type of repetitive-stress injury is carpal-tunnel syndrome, which is usually related to keyboard use. The syndrome is the result of putting pressure on the nerves that run from the hand to the arm and is characterized by pain and weakness in the hand, arm, and even the shoulder.
An Employee-Friendly Workstation and Environment
Dr. Bruce Bernard of the U.S. National Institute of Occupational Safety and Health encourages employers to evaluate the nature and extent of keyboard use. "Generally," he says, "carpal tunnel syndrome is not found in the workplace unless tendonitis appears there first. You don't want to wait until tendonitis presents itself. You really need to take seriously employee complaints of discomfort." Sellers echoes this approach, urging employers "to examine the workstation environment. Just simply look at how a person is using a workstation: Is he or she comfortable?"
To ensure a comfortable workstation, employers need to be aware of workplace ergonomics, which is the effective and safe interaction between people and things. When reviewing the current work environment, employers should look to see if employees have already made their own adjustments to improve comfort. For example, has an employee placed his monitor on a stack of books, added a cushion to his chair, or placed the legs of his desk on blocks? If so, then clearly the original workstation configuration is not effective. Employers may then consider investing in ergonomically designed furniture and computer accessories that can be adjusted to meet the needs of an individual employee.
Employers should also encourage employees who work extensively with computers to take regular breaks. Marvin Dainoff, director of the Center for Ergonomics Research at Miami University of Ohio, urges employers to "remember that people are not machines." Dainoff also recommends stretching as a means of eliminating musculoskeletal problems. Finally, Bernard and other experts urge employers to create an environment where employees feel they can speak up when they are experiencing any pain or discomfort. The sooner a problem is identified, the greater the employer's chance of controlling related costs.
Sun Ultra 27 Workstation
The Right System for Software Development and MCAD
Oracle's Sun Ultra 27 Workstation combines world-class engineering, Intel horsepower, and professional NVIDIA graphics in a package that can easily handle demanding engineering and visualization workloads. It also comes pre-installed with the software developers need to get right to work.
Key Applications
* Software development
* Electronic design automation (EDA)
* Mechanical computer-aided design (MCAD)
* Scientific visualization
AT A GLANCE
* Powered by Intel Xeon processor 3500 series, running up to 3.3GHz
* Supports NVIDIA Quadro FX 5800 graphics accelerator card
* Compatible with more operating systems than any workstation in its class
* Pre-installed with Oracle software development tools
KEY SPECIFICATIONS
* Choice of three quad-core Intel Xeon processors (up to 3.3GHz)
* Choice of four NVIDIA Quadro FX graphics accelerator cards
* Up to four SATA or SAS drives (4TB and 1.8TB maximum, respectively)
* Six expansion slots, including two PCI-Express 2.0 16-lane slots
* Validated for Linux, Oracle Solaris, and Microsoft Windows
More Specifications